00:00:00.001 Started by upstream project "autotest-per-patch" build number 132751 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.020 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.021 The recommended git tool is: git 00:00:00.021 using credential 00000000-0000-0000-0000-000000000002 00:00:00.024 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.039 Fetching changes from the remote Git repository 00:00:00.046 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.070 Using shallow fetch with depth 1 00:00:00.070 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.070 > git --version # timeout=10 00:00:00.101 > git --version # 'git version 2.39.2' 00:00:00.101 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.150 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.150 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.694 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.709 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.722 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.722 > git config core.sparsecheckout # timeout=10 00:00:02.736 > git read-tree -mu HEAD # timeout=10 00:00:02.755 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.778 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.778 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.875 [Pipeline] Start of Pipeline 00:00:02.889 [Pipeline] library 00:00:02.891 Loading library shm_lib@master 00:00:02.891 Library shm_lib@master is cached. Copying from home. 00:00:02.909 [Pipeline] node 00:00:02.934 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.936 [Pipeline] { 00:00:02.946 [Pipeline] catchError 00:00:02.947 [Pipeline] { 00:00:02.958 [Pipeline] wrap 00:00:02.966 [Pipeline] { 00:00:02.974 [Pipeline] stage 00:00:02.976 [Pipeline] { (Prologue) 00:00:02.994 [Pipeline] echo 00:00:02.995 Node: VM-host-WFP7 00:00:03.001 [Pipeline] cleanWs 00:00:03.009 [WS-CLEANUP] Deleting project workspace... 00:00:03.009 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.014 [WS-CLEANUP] done 00:00:03.204 [Pipeline] setCustomBuildProperty 00:00:03.297 [Pipeline] httpRequest 00:00:03.633 [Pipeline] echo 00:00:03.634 Sorcerer 10.211.164.101 is alive 00:00:03.641 [Pipeline] retry 00:00:03.643 [Pipeline] { 00:00:03.655 [Pipeline] httpRequest 00:00:03.658 HttpMethod: GET 00:00:03.659 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.659 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.661 Response Code: HTTP/1.1 200 OK 00:00:03.661 Success: Status code 200 is in the accepted range: 200,404 00:00:03.661 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.807 [Pipeline] } 00:00:03.820 [Pipeline] // retry 00:00:03.827 [Pipeline] sh 00:00:04.105 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.116 [Pipeline] httpRequest 00:00:04.638 [Pipeline] echo 00:00:04.640 Sorcerer 10.211.164.101 is alive 00:00:04.648 [Pipeline] retry 00:00:04.649 [Pipeline] { 00:00:04.662 [Pipeline] httpRequest 00:00:04.666 HttpMethod: GET 00:00:04.666 URL: 
http://10.211.164.101/packages/spdk_0ea9ac02fc70bace95fb4d1fef30cb4a754f5183.tar.gz 00:00:04.666 Sending request to url: http://10.211.164.101/packages/spdk_0ea9ac02fc70bace95fb4d1fef30cb4a754f5183.tar.gz 00:00:04.667 Response Code: HTTP/1.1 200 OK 00:00:04.668 Success: Status code 200 is in the accepted range: 200,404 00:00:04.668 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_0ea9ac02fc70bace95fb4d1fef30cb4a754f5183.tar.gz 00:00:23.966 [Pipeline] } 00:00:23.984 [Pipeline] // retry 00:00:23.991 [Pipeline] sh 00:00:24.272 + tar --no-same-owner -xf spdk_0ea9ac02fc70bace95fb4d1fef30cb4a754f5183.tar.gz 00:00:27.594 [Pipeline] sh 00:00:27.874 + git -C spdk log --oneline -n5 00:00:27.874 0ea9ac02f accel/mlx5: Create pool of UMRs 00:00:27.875 60adca7e1 lib/mlx5: API to configure UMR 00:00:27.875 c2471e450 nvmf: Clean unassociated_qpairs on connect error 00:00:27.875 5469bd2d1 nvmf/rdma: Fix destroy of uninitialized qpair 00:00:27.875 c7acbd6be test/iscsi_tgt: Remove support for the namespace arg 00:00:27.892 [Pipeline] writeFile 00:00:27.908 [Pipeline] sh 00:00:28.188 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:28.198 [Pipeline] sh 00:00:28.476 + cat autorun-spdk.conf 00:00:28.476 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.476 SPDK_RUN_ASAN=1 00:00:28.476 SPDK_RUN_UBSAN=1 00:00:28.476 SPDK_TEST_RAID=1 00:00:28.476 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.482 RUN_NIGHTLY=0 00:00:28.484 [Pipeline] } 00:00:28.496 [Pipeline] // stage 00:00:28.509 [Pipeline] stage 00:00:28.511 [Pipeline] { (Run VM) 00:00:28.524 [Pipeline] sh 00:00:28.802 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:28.802 + echo 'Start stage prepare_nvme.sh' 00:00:28.802 Start stage prepare_nvme.sh 00:00:28.802 + [[ -n 4 ]] 00:00:28.802 + disk_prefix=ex4 00:00:28.802 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:28.802 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:28.802 + source 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:28.802 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.802 ++ SPDK_RUN_ASAN=1 00:00:28.802 ++ SPDK_RUN_UBSAN=1 00:00:28.802 ++ SPDK_TEST_RAID=1 00:00:28.802 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:28.802 ++ RUN_NIGHTLY=0 00:00:28.802 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:28.802 + nvme_files=() 00:00:28.802 + declare -A nvme_files 00:00:28.802 + backend_dir=/var/lib/libvirt/images/backends 00:00:28.802 + nvme_files['nvme.img']=5G 00:00:28.802 + nvme_files['nvme-cmb.img']=5G 00:00:28.802 + nvme_files['nvme-multi0.img']=4G 00:00:28.802 + nvme_files['nvme-multi1.img']=4G 00:00:28.802 + nvme_files['nvme-multi2.img']=4G 00:00:28.802 + nvme_files['nvme-openstack.img']=8G 00:00:28.802 + nvme_files['nvme-zns.img']=5G 00:00:28.802 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:28.802 + (( SPDK_TEST_FTL == 1 )) 00:00:28.802 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:28.802 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:28.802 + for nvme in "${!nvme_files[@]}" 00:00:28.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:28.802 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.802 + for nvme in "${!nvme_files[@]}" 00:00:28.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:28.802 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.802 + for nvme in "${!nvme_files[@]}" 00:00:28.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:28.802 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:28.802 + for nvme in "${!nvme_files[@]}" 00:00:28.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:28.802 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.802 + for nvme in "${!nvme_files[@]}" 00:00:28.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:28.802 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.802 + for nvme in "${!nvme_files[@]}" 00:00:28.802 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:29.061 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.061 + for nvme in "${!nvme_files[@]}" 00:00:29.061 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:29.061 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.061 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:29.061 + echo 'End stage prepare_nvme.sh' 00:00:29.061 End stage prepare_nvme.sh 00:00:29.072 [Pipeline] sh 00:00:29.354 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:29.354 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:00:29.354 00:00:29.354 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:29.354 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:29.354 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:29.354 HELP=0 00:00:29.354 DRY_RUN=0 
00:00:29.354 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:00:29.354 NVME_DISKS_TYPE=nvme,nvme, 00:00:29.354 NVME_AUTO_CREATE=0 00:00:29.354 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:00:29.354 NVME_CMB=,, 00:00:29.354 NVME_PMR=,, 00:00:29.354 NVME_ZNS=,, 00:00:29.354 NVME_MS=,, 00:00:29.354 NVME_FDP=,, 00:00:29.354 SPDK_VAGRANT_DISTRO=fedora39 00:00:29.354 SPDK_VAGRANT_VMCPU=10 00:00:29.354 SPDK_VAGRANT_VMRAM=12288 00:00:29.354 SPDK_VAGRANT_PROVIDER=libvirt 00:00:29.354 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:29.354 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:29.354 SPDK_OPENSTACK_NETWORK=0 00:00:29.354 VAGRANT_PACKAGE_BOX=0 00:00:29.354 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:29.354 FORCE_DISTRO=true 00:00:29.354 VAGRANT_BOX_VERSION= 00:00:29.354 EXTRA_VAGRANTFILES= 00:00:29.354 NIC_MODEL=virtio 00:00:29.354 00:00:29.354 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:29.354 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:31.891 Bringing machine 'default' up with 'libvirt' provider... 00:00:32.460 ==> default: Creating image (snapshot of base box volume). 00:00:32.719 ==> default: Creating domain with the following settings... 
00:00:32.719 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733507864_d240bdc55d54eaa8ce88 00:00:32.719 ==> default: -- Domain type: kvm 00:00:32.719 ==> default: -- Cpus: 10 00:00:32.719 ==> default: -- Feature: acpi 00:00:32.719 ==> default: -- Feature: apic 00:00:32.719 ==> default: -- Feature: pae 00:00:32.719 ==> default: -- Memory: 12288M 00:00:32.719 ==> default: -- Memory Backing: hugepages: 00:00:32.719 ==> default: -- Management MAC: 00:00:32.719 ==> default: -- Loader: 00:00:32.719 ==> default: -- Nvram: 00:00:32.719 ==> default: -- Base box: spdk/fedora39 00:00:32.719 ==> default: -- Storage pool: default 00:00:32.719 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733507864_d240bdc55d54eaa8ce88.img (20G) 00:00:32.719 ==> default: -- Volume Cache: default 00:00:32.719 ==> default: -- Kernel: 00:00:32.719 ==> default: -- Initrd: 00:00:32.720 ==> default: -- Graphics Type: vnc 00:00:32.720 ==> default: -- Graphics Port: -1 00:00:32.720 ==> default: -- Graphics IP: 127.0.0.1 00:00:32.720 ==> default: -- Graphics Password: Not defined 00:00:32.720 ==> default: -- Video Type: cirrus 00:00:32.720 ==> default: -- Video VRAM: 9216 00:00:32.720 ==> default: -- Sound Type: 00:00:32.720 ==> default: -- Keymap: en-us 00:00:32.720 ==> default: -- TPM Path: 00:00:32.720 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:32.720 ==> default: -- Command line args: 00:00:32.720 ==> default: -> value=-device, 00:00:32.720 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:32.720 ==> default: -> value=-drive, 00:00:32.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:32.720 ==> default: -> value=-device, 00:00:32.720 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.720 ==> default: -> value=-device, 00:00:32.720 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:32.720 ==> default: -> value=-drive, 00:00:32.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:32.720 ==> default: -> value=-device, 00:00:32.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.720 ==> default: -> value=-drive, 00:00:32.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:32.720 ==> default: -> value=-device, 00:00:32.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.720 ==> default: -> value=-drive, 00:00:32.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:32.720 ==> default: -> value=-device, 00:00:32.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.720 ==> default: Creating shared folders metadata... 00:00:32.720 ==> default: Starting domain. 00:00:34.099 ==> default: Waiting for domain to get an IP address... 00:00:52.196 ==> default: Waiting for SSH to become available... 00:00:52.196 ==> default: Configuring and enabling network interfaces... 00:00:57.498 default: SSH address: 192.168.121.86:22 00:00:57.498 default: SSH username: vagrant 00:00:57.498 default: SSH auth method: private key 00:01:00.031 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:08.165 ==> default: Mounting SSHFS shared folder... 00:01:10.718 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:10.718 ==> default: Checking Mount.. 
00:01:12.102 ==> default: Folder Successfully Mounted! 00:01:12.102 ==> default: Running provisioner: file... 00:01:13.482 default: ~/.gitconfig => .gitconfig 00:01:14.049 00:01:14.049 SUCCESS! 00:01:14.049 00:01:14.049 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:14.049 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:14.049 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:14.049 00:01:14.056 [Pipeline] } 00:01:14.066 [Pipeline] // stage 00:01:14.072 [Pipeline] dir 00:01:14.073 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:14.074 [Pipeline] { 00:01:14.082 [Pipeline] catchError 00:01:14.084 [Pipeline] { 00:01:14.091 [Pipeline] sh 00:01:14.370 + vagrant ssh-config --host vagrant 00:01:14.370 + sed -ne /^Host/,$p 00:01:14.370 + tee ssh_conf 00:01:17.670 Host vagrant 00:01:17.670 HostName 192.168.121.86 00:01:17.670 User vagrant 00:01:17.670 Port 22 00:01:17.670 UserKnownHostsFile /dev/null 00:01:17.670 StrictHostKeyChecking no 00:01:17.670 PasswordAuthentication no 00:01:17.670 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:17.670 IdentitiesOnly yes 00:01:17.670 LogLevel FATAL 00:01:17.670 ForwardAgent yes 00:01:17.670 ForwardX11 yes 00:01:17.670 00:01:17.682 [Pipeline] withEnv 00:01:17.684 [Pipeline] { 00:01:17.695 [Pipeline] sh 00:01:17.981 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.981 source /etc/os-release 00:01:17.981 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.981 # Minimal, systemd-like check. 
00:01:17.981 if [[ -e /.dockerenv ]]; then 00:01:17.981 # Clear garbage from the node's name: 00:01:17.981 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.981 # $HOSTNAME is the actual container id 00:01:17.981 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.981 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.981 # We can assume this is a mount from a host where container is running, 00:01:17.981 # so fetch its hostname to easily identify the target swarm worker. 00:01:17.981 container="$(< /etc/hostname) ($agent)" 00:01:17.981 else 00:01:17.981 # Fallback 00:01:17.981 container=$agent 00:01:17.981 fi 00:01:17.981 fi 00:01:17.981 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.981 00:01:18.249 [Pipeline] } 00:01:18.263 [Pipeline] // withEnv 00:01:18.271 [Pipeline] setCustomBuildProperty 00:01:18.282 [Pipeline] stage 00:01:18.283 [Pipeline] { (Tests) 00:01:18.297 [Pipeline] sh 00:01:18.575 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:18.844 [Pipeline] sh 00:01:19.120 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:19.393 [Pipeline] timeout 00:01:19.393 Timeout set to expire in 1 hr 30 min 00:01:19.414 [Pipeline] { 00:01:19.440 [Pipeline] sh 00:01:19.714 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:20.283 HEAD is now at 0ea9ac02f accel/mlx5: Create pool of UMRs 00:01:20.295 [Pipeline] sh 00:01:20.574 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:20.841 [Pipeline] sh 00:01:21.143 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:21.431 [Pipeline] sh 00:01:21.719 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh 
spdk_repo 00:01:21.977 ++ readlink -f spdk_repo 00:01:21.977 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:21.977 + [[ -n /home/vagrant/spdk_repo ]] 00:01:21.977 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:21.977 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:21.977 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:21.977 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:21.977 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:21.977 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:21.977 + cd /home/vagrant/spdk_repo 00:01:21.977 + source /etc/os-release 00:01:21.977 ++ NAME='Fedora Linux' 00:01:21.977 ++ VERSION='39 (Cloud Edition)' 00:01:21.977 ++ ID=fedora 00:01:21.977 ++ VERSION_ID=39 00:01:21.977 ++ VERSION_CODENAME= 00:01:21.977 ++ PLATFORM_ID=platform:f39 00:01:21.977 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.977 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.977 ++ LOGO=fedora-logo-icon 00:01:21.977 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.977 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.977 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.977 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.977 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.977 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.977 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.977 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.977 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.977 ++ SUPPORT_END=2024-11-12 00:01:21.977 ++ VARIANT='Cloud Edition' 00:01:21.977 ++ VARIANT_ID=cloud 00:01:21.977 + uname -a 00:01:21.977 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:21.977 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:22.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:22.544 Hugepages 00:01:22.544 node hugesize free / total 00:01:22.544 node0 
1048576kB 0 / 0 00:01:22.544 node0 2048kB 0 / 0 00:01:22.544 00:01:22.545 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.545 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:22.545 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:22.545 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:22.545 + rm -f /tmp/spdk-ld-path 00:01:22.545 + source autorun-spdk.conf 00:01:22.545 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.545 ++ SPDK_RUN_ASAN=1 00:01:22.545 ++ SPDK_RUN_UBSAN=1 00:01:22.545 ++ SPDK_TEST_RAID=1 00:01:22.545 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.545 ++ RUN_NIGHTLY=0 00:01:22.545 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.545 + [[ -n '' ]] 00:01:22.545 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:22.545 + for M in /var/spdk/build-*-manifest.txt 00:01:22.545 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:22.545 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.545 + for M in /var/spdk/build-*-manifest.txt 00:01:22.545 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.545 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.545 + for M in /var/spdk/build-*-manifest.txt 00:01:22.545 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.545 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.545 ++ uname 00:01:22.545 + [[ Linux == \L\i\n\u\x ]] 00:01:22.545 + sudo dmesg -T 00:01:22.545 + sudo dmesg --clear 00:01:22.804 + dmesg_pid=5430 00:01:22.804 + sudo dmesg -Tw 00:01:22.804 + [[ Fedora Linux == FreeBSD ]] 00:01:22.804 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.804 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.804 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.804 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.804 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.804 + FIO_BIN=/usr/src/fio-static/fio 
00:01:22.804 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.804 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:22.804 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.804 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.804 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.804 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.804 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.804 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.804 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.804 17:58:34 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:22.804 17:58:34 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.804 17:58:34 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.804 17:58:34 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:22.804 17:58:34 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:22.804 17:58:34 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:22.804 17:58:34 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.804 17:58:34 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:22.804 17:58:34 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:22.804 17:58:34 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.804 17:58:34 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:22.804 17:58:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:22.804 17:58:34 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:22.804 17:58:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.804 17:58:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.804 17:58:34 -- scripts/common.sh@553 -- $ source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.804 17:58:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.804 17:58:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.804 17:58:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.804 17:58:34 -- paths/export.sh@5 -- $ export PATH 00:01:22.804 17:58:34 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.804 17:58:34 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:22.804 17:58:34 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:22.804 17:58:34 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733507914.XXXXXX 00:01:22.804 17:58:34 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733507914.MkbnPU 00:01:22.804 17:58:34 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:22.804 17:58:34 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:22.804 17:58:34 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:22.804 17:58:34 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:22.804 17:58:34 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.804 17:58:34 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:22.804 17:58:34 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:22.804 17:58:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.804 17:58:34 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:01:22.804 17:58:34 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:22.804 17:58:34 -- pm/common@17 -- $ local monitor 00:01:22.804 17:58:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.804 17:58:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.804 17:58:34 -- pm/common@25 -- $ sleep 1 00:01:22.804 17:58:34 -- pm/common@21 -- $ date +%s 00:01:22.804 17:58:34 -- pm/common@21 -- $ date +%s 00:01:22.804 17:58:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733507914 00:01:22.804 17:58:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733507914 00:01:23.067 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733507914_collect-cpu-load.pm.log 00:01:23.067 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733507914_collect-vmstat.pm.log 00:01:24.006 17:58:35 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:24.006 17:58:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.006 17:58:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.006 17:58:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:24.006 17:58:35 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.006 Fri Dec 6 05:58:35 PM UTC 2024 00:01:24.006 17:58:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.006 v25.01-pre-308-g0ea9ac02f 00:01:24.006 17:58:35 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:24.006 17:58:35 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:24.006 17:58:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:24.006 17:58:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:24.006 17:58:35 -- common/autotest_common.sh@10 -- $ set +x 
00:01:24.006 ************************************ 00:01:24.006 START TEST asan 00:01:24.006 ************************************ 00:01:24.006 using asan 00:01:24.006 17:58:36 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:24.006 00:01:24.006 real 0m0.000s 00:01:24.006 user 0m0.000s 00:01:24.006 sys 0m0.000s 00:01:24.006 17:58:36 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:24.006 17:58:36 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.006 ************************************ 00:01:24.006 END TEST asan 00:01:24.006 ************************************ 00:01:24.006 17:58:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.006 17:58:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.006 17:58:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:24.006 17:58:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:24.006 17:58:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.006 ************************************ 00:01:24.006 START TEST ubsan 00:01:24.006 ************************************ 00:01:24.006 using ubsan 00:01:24.006 17:58:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:24.006 00:01:24.006 real 0m0.000s 00:01:24.006 user 0m0.000s 00:01:24.006 sys 0m0.000s 00:01:24.006 17:58:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:24.006 17:58:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.006 ************************************ 00:01:24.006 END TEST ubsan 00:01:24.006 ************************************ 00:01:24.006 17:58:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:24.006 17:58:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:24.006 17:58:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:24.006 17:58:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:24.006 17:58:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:24.006 17:58:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:01:24.006 17:58:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:24.006 17:58:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:24.006 17:58:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:24.265 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:24.265 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:24.832 Using 'verbs' RDMA provider 00:01:40.727 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:58.844 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:58.844 Creating mk/config.mk...done. 00:01:58.844 Creating mk/cc.flags.mk...done. 00:01:58.844 Type 'make' to build. 00:01:58.844 17:59:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:58.844 17:59:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:58.844 17:59:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:58.844 17:59:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.844 ************************************ 00:01:58.844 START TEST make 00:01:58.844 ************************************ 00:01:58.844 17:59:08 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:58.844 make[1]: Nothing to be done for 'all'. 
00:02:13.733 The Meson build system 00:02:13.733 Version: 1.5.0 00:02:13.733 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:13.733 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:13.733 Build type: native build 00:02:13.733 Program cat found: YES (/usr/bin/cat) 00:02:13.733 Project name: DPDK 00:02:13.733 Project version: 24.03.0 00:02:13.733 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:13.733 C linker for the host machine: cc ld.bfd 2.40-14 00:02:13.733 Host machine cpu family: x86_64 00:02:13.733 Host machine cpu: x86_64 00:02:13.733 Message: ## Building in Developer Mode ## 00:02:13.733 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:13.733 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:13.733 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:13.733 Program python3 found: YES (/usr/bin/python3) 00:02:13.733 Program cat found: YES (/usr/bin/cat) 00:02:13.733 Compiler for C supports arguments -march=native: YES 00:02:13.733 Checking for size of "void *" : 8 00:02:13.733 Checking for size of "void *" : 8 (cached) 00:02:13.733 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:13.733 Library m found: YES 00:02:13.733 Library numa found: YES 00:02:13.733 Has header "numaif.h" : YES 00:02:13.733 Library fdt found: NO 00:02:13.733 Library execinfo found: NO 00:02:13.733 Has header "execinfo.h" : YES 00:02:13.733 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:13.733 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:13.733 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:13.733 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:13.733 Run-time dependency openssl found: YES 3.1.1 00:02:13.733 Run-time dependency libpcap found: YES 1.10.4 00:02:13.733 Has header "pcap.h" with dependency 
libpcap: YES 00:02:13.733 Compiler for C supports arguments -Wcast-qual: YES 00:02:13.733 Compiler for C supports arguments -Wdeprecated: YES 00:02:13.733 Compiler for C supports arguments -Wformat: YES 00:02:13.733 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:13.733 Compiler for C supports arguments -Wformat-security: NO 00:02:13.733 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:13.733 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:13.733 Compiler for C supports arguments -Wnested-externs: YES 00:02:13.733 Compiler for C supports arguments -Wold-style-definition: YES 00:02:13.733 Compiler for C supports arguments -Wpointer-arith: YES 00:02:13.733 Compiler for C supports arguments -Wsign-compare: YES 00:02:13.733 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:13.733 Compiler for C supports arguments -Wundef: YES 00:02:13.733 Compiler for C supports arguments -Wwrite-strings: YES 00:02:13.733 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:13.733 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:13.733 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:13.733 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:13.733 Program objdump found: YES (/usr/bin/objdump) 00:02:13.733 Compiler for C supports arguments -mavx512f: YES 00:02:13.733 Checking if "AVX512 checking" compiles: YES 00:02:13.733 Fetching value of define "__SSE4_2__" : 1 00:02:13.733 Fetching value of define "__AES__" : 1 00:02:13.733 Fetching value of define "__AVX__" : 1 00:02:13.733 Fetching value of define "__AVX2__" : 1 00:02:13.733 Fetching value of define "__AVX512BW__" : 1 00:02:13.733 Fetching value of define "__AVX512CD__" : 1 00:02:13.733 Fetching value of define "__AVX512DQ__" : 1 00:02:13.733 Fetching value of define "__AVX512F__" : 1 00:02:13.733 Fetching value of define "__AVX512VL__" : 1 00:02:13.733 Fetching value of define 
"__PCLMUL__" : 1 00:02:13.733 Fetching value of define "__RDRND__" : 1 00:02:13.733 Fetching value of define "__RDSEED__" : 1 00:02:13.733 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:13.733 Fetching value of define "__znver1__" : (undefined) 00:02:13.733 Fetching value of define "__znver2__" : (undefined) 00:02:13.734 Fetching value of define "__znver3__" : (undefined) 00:02:13.734 Fetching value of define "__znver4__" : (undefined) 00:02:13.734 Library asan found: YES 00:02:13.734 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:13.734 Message: lib/log: Defining dependency "log" 00:02:13.734 Message: lib/kvargs: Defining dependency "kvargs" 00:02:13.734 Message: lib/telemetry: Defining dependency "telemetry" 00:02:13.734 Library rt found: YES 00:02:13.734 Checking for function "getentropy" : NO 00:02:13.734 Message: lib/eal: Defining dependency "eal" 00:02:13.734 Message: lib/ring: Defining dependency "ring" 00:02:13.734 Message: lib/rcu: Defining dependency "rcu" 00:02:13.734 Message: lib/mempool: Defining dependency "mempool" 00:02:13.734 Message: lib/mbuf: Defining dependency "mbuf" 00:02:13.734 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:13.734 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.734 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.734 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:13.734 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:13.734 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:13.734 Compiler for C supports arguments -mpclmul: YES 00:02:13.734 Compiler for C supports arguments -maes: YES 00:02:13.734 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.734 Compiler for C supports arguments -mavx512bw: YES 00:02:13.734 Compiler for C supports arguments -mavx512dq: YES 00:02:13.734 Compiler for C supports arguments -mavx512vl: YES 00:02:13.734 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:13.734 Compiler for C supports arguments -mavx2: YES 00:02:13.734 Compiler for C supports arguments -mavx: YES 00:02:13.734 Message: lib/net: Defining dependency "net" 00:02:13.734 Message: lib/meter: Defining dependency "meter" 00:02:13.734 Message: lib/ethdev: Defining dependency "ethdev" 00:02:13.734 Message: lib/pci: Defining dependency "pci" 00:02:13.734 Message: lib/cmdline: Defining dependency "cmdline" 00:02:13.734 Message: lib/hash: Defining dependency "hash" 00:02:13.734 Message: lib/timer: Defining dependency "timer" 00:02:13.734 Message: lib/compressdev: Defining dependency "compressdev" 00:02:13.734 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:13.734 Message: lib/dmadev: Defining dependency "dmadev" 00:02:13.734 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:13.734 Message: lib/power: Defining dependency "power" 00:02:13.734 Message: lib/reorder: Defining dependency "reorder" 00:02:13.734 Message: lib/security: Defining dependency "security" 00:02:13.734 Has header "linux/userfaultfd.h" : YES 00:02:13.734 Has header "linux/vduse.h" : YES 00:02:13.734 Message: lib/vhost: Defining dependency "vhost" 00:02:13.734 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:13.734 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:13.734 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:13.734 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:13.734 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:13.734 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:13.734 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:13.734 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:13.734 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:13.734 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:13.734 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:13.734 Configuring doxy-api-html.conf using configuration 00:02:13.734 Configuring doxy-api-man.conf using configuration 00:02:13.734 Program mandb found: YES (/usr/bin/mandb) 00:02:13.734 Program sphinx-build found: NO 00:02:13.734 Configuring rte_build_config.h using configuration 00:02:13.734 Message: 00:02:13.734 ================= 00:02:13.734 Applications Enabled 00:02:13.734 ================= 00:02:13.734 00:02:13.734 apps: 00:02:13.734 00:02:13.734 00:02:13.734 Message: 00:02:13.734 ================= 00:02:13.734 Libraries Enabled 00:02:13.734 ================= 00:02:13.734 00:02:13.734 libs: 00:02:13.734 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:13.734 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:13.734 cryptodev, dmadev, power, reorder, security, vhost, 00:02:13.734 00:02:13.734 Message: 00:02:13.734 =============== 00:02:13.734 Drivers Enabled 00:02:13.734 =============== 00:02:13.734 00:02:13.734 common: 00:02:13.734 00:02:13.734 bus: 00:02:13.734 pci, vdev, 00:02:13.734 mempool: 00:02:13.734 ring, 00:02:13.734 dma: 00:02:13.734 00:02:13.734 net: 00:02:13.734 00:02:13.734 crypto: 00:02:13.734 00:02:13.734 compress: 00:02:13.734 00:02:13.734 vdpa: 00:02:13.734 00:02:13.734 00:02:13.734 Message: 00:02:13.734 ================= 00:02:13.734 Content Skipped 00:02:13.734 ================= 00:02:13.734 00:02:13.734 apps: 00:02:13.734 dumpcap: explicitly disabled via build config 00:02:13.734 graph: explicitly disabled via build config 00:02:13.734 pdump: explicitly disabled via build config 00:02:13.734 proc-info: explicitly disabled via build config 00:02:13.734 test-acl: explicitly disabled via build config 00:02:13.734 test-bbdev: explicitly disabled via build config 00:02:13.734 test-cmdline: explicitly disabled via build config 00:02:13.734 test-compress-perf: explicitly disabled via build config 00:02:13.734 test-crypto-perf: explicitly disabled via build 
config 00:02:13.734 test-dma-perf: explicitly disabled via build config 00:02:13.734 test-eventdev: explicitly disabled via build config 00:02:13.734 test-fib: explicitly disabled via build config 00:02:13.734 test-flow-perf: explicitly disabled via build config 00:02:13.734 test-gpudev: explicitly disabled via build config 00:02:13.734 test-mldev: explicitly disabled via build config 00:02:13.734 test-pipeline: explicitly disabled via build config 00:02:13.734 test-pmd: explicitly disabled via build config 00:02:13.734 test-regex: explicitly disabled via build config 00:02:13.734 test-sad: explicitly disabled via build config 00:02:13.734 test-security-perf: explicitly disabled via build config 00:02:13.734 00:02:13.734 libs: 00:02:13.734 argparse: explicitly disabled via build config 00:02:13.734 metrics: explicitly disabled via build config 00:02:13.734 acl: explicitly disabled via build config 00:02:13.734 bbdev: explicitly disabled via build config 00:02:13.734 bitratestats: explicitly disabled via build config 00:02:13.734 bpf: explicitly disabled via build config 00:02:13.734 cfgfile: explicitly disabled via build config 00:02:13.734 distributor: explicitly disabled via build config 00:02:13.734 efd: explicitly disabled via build config 00:02:13.734 eventdev: explicitly disabled via build config 00:02:13.734 dispatcher: explicitly disabled via build config 00:02:13.734 gpudev: explicitly disabled via build config 00:02:13.734 gro: explicitly disabled via build config 00:02:13.734 gso: explicitly disabled via build config 00:02:13.734 ip_frag: explicitly disabled via build config 00:02:13.734 jobstats: explicitly disabled via build config 00:02:13.734 latencystats: explicitly disabled via build config 00:02:13.734 lpm: explicitly disabled via build config 00:02:13.734 member: explicitly disabled via build config 00:02:13.734 pcapng: explicitly disabled via build config 00:02:13.734 rawdev: explicitly disabled via build config 00:02:13.734 regexdev: explicitly 
disabled via build config 00:02:13.734 mldev: explicitly disabled via build config 00:02:13.734 rib: explicitly disabled via build config 00:02:13.734 sched: explicitly disabled via build config 00:02:13.734 stack: explicitly disabled via build config 00:02:13.734 ipsec: explicitly disabled via build config 00:02:13.734 pdcp: explicitly disabled via build config 00:02:13.734 fib: explicitly disabled via build config 00:02:13.734 port: explicitly disabled via build config 00:02:13.734 pdump: explicitly disabled via build config 00:02:13.734 table: explicitly disabled via build config 00:02:13.734 pipeline: explicitly disabled via build config 00:02:13.734 graph: explicitly disabled via build config 00:02:13.734 node: explicitly disabled via build config 00:02:13.734 00:02:13.734 drivers: 00:02:13.734 common/cpt: not in enabled drivers build config 00:02:13.734 common/dpaax: not in enabled drivers build config 00:02:13.734 common/iavf: not in enabled drivers build config 00:02:13.734 common/idpf: not in enabled drivers build config 00:02:13.734 common/ionic: not in enabled drivers build config 00:02:13.734 common/mvep: not in enabled drivers build config 00:02:13.734 common/octeontx: not in enabled drivers build config 00:02:13.734 bus/auxiliary: not in enabled drivers build config 00:02:13.734 bus/cdx: not in enabled drivers build config 00:02:13.734 bus/dpaa: not in enabled drivers build config 00:02:13.734 bus/fslmc: not in enabled drivers build config 00:02:13.734 bus/ifpga: not in enabled drivers build config 00:02:13.734 bus/platform: not in enabled drivers build config 00:02:13.734 bus/uacce: not in enabled drivers build config 00:02:13.734 bus/vmbus: not in enabled drivers build config 00:02:13.734 common/cnxk: not in enabled drivers build config 00:02:13.734 common/mlx5: not in enabled drivers build config 00:02:13.734 common/nfp: not in enabled drivers build config 00:02:13.734 common/nitrox: not in enabled drivers build config 00:02:13.734 common/qat: not 
in enabled drivers build config 00:02:13.734 common/sfc_efx: not in enabled drivers build config 00:02:13.734 mempool/bucket: not in enabled drivers build config 00:02:13.734 mempool/cnxk: not in enabled drivers build config 00:02:13.734 mempool/dpaa: not in enabled drivers build config 00:02:13.735 mempool/dpaa2: not in enabled drivers build config 00:02:13.735 mempool/octeontx: not in enabled drivers build config 00:02:13.735 mempool/stack: not in enabled drivers build config 00:02:13.735 dma/cnxk: not in enabled drivers build config 00:02:13.735 dma/dpaa: not in enabled drivers build config 00:02:13.735 dma/dpaa2: not in enabled drivers build config 00:02:13.735 dma/hisilicon: not in enabled drivers build config 00:02:13.735 dma/idxd: not in enabled drivers build config 00:02:13.735 dma/ioat: not in enabled drivers build config 00:02:13.735 dma/skeleton: not in enabled drivers build config 00:02:13.735 net/af_packet: not in enabled drivers build config 00:02:13.735 net/af_xdp: not in enabled drivers build config 00:02:13.735 net/ark: not in enabled drivers build config 00:02:13.735 net/atlantic: not in enabled drivers build config 00:02:13.735 net/avp: not in enabled drivers build config 00:02:13.735 net/axgbe: not in enabled drivers build config 00:02:13.735 net/bnx2x: not in enabled drivers build config 00:02:13.735 net/bnxt: not in enabled drivers build config 00:02:13.735 net/bonding: not in enabled drivers build config 00:02:13.735 net/cnxk: not in enabled drivers build config 00:02:13.735 net/cpfl: not in enabled drivers build config 00:02:13.735 net/cxgbe: not in enabled drivers build config 00:02:13.735 net/dpaa: not in enabled drivers build config 00:02:13.735 net/dpaa2: not in enabled drivers build config 00:02:13.735 net/e1000: not in enabled drivers build config 00:02:13.735 net/ena: not in enabled drivers build config 00:02:13.735 net/enetc: not in enabled drivers build config 00:02:13.735 net/enetfec: not in enabled drivers build config 
00:02:13.735 net/enic: not in enabled drivers build config 00:02:13.735 net/failsafe: not in enabled drivers build config 00:02:13.735 net/fm10k: not in enabled drivers build config 00:02:13.735 net/gve: not in enabled drivers build config 00:02:13.735 net/hinic: not in enabled drivers build config 00:02:13.735 net/hns3: not in enabled drivers build config 00:02:13.735 net/i40e: not in enabled drivers build config 00:02:13.735 net/iavf: not in enabled drivers build config 00:02:13.735 net/ice: not in enabled drivers build config 00:02:13.735 net/idpf: not in enabled drivers build config 00:02:13.735 net/igc: not in enabled drivers build config 00:02:13.735 net/ionic: not in enabled drivers build config 00:02:13.735 net/ipn3ke: not in enabled drivers build config 00:02:13.735 net/ixgbe: not in enabled drivers build config 00:02:13.735 net/mana: not in enabled drivers build config 00:02:13.735 net/memif: not in enabled drivers build config 00:02:13.735 net/mlx4: not in enabled drivers build config 00:02:13.735 net/mlx5: not in enabled drivers build config 00:02:13.735 net/mvneta: not in enabled drivers build config 00:02:13.735 net/mvpp2: not in enabled drivers build config 00:02:13.735 net/netvsc: not in enabled drivers build config 00:02:13.735 net/nfb: not in enabled drivers build config 00:02:13.735 net/nfp: not in enabled drivers build config 00:02:13.735 net/ngbe: not in enabled drivers build config 00:02:13.735 net/null: not in enabled drivers build config 00:02:13.735 net/octeontx: not in enabled drivers build config 00:02:13.735 net/octeon_ep: not in enabled drivers build config 00:02:13.735 net/pcap: not in enabled drivers build config 00:02:13.735 net/pfe: not in enabled drivers build config 00:02:13.735 net/qede: not in enabled drivers build config 00:02:13.735 net/ring: not in enabled drivers build config 00:02:13.735 net/sfc: not in enabled drivers build config 00:02:13.735 net/softnic: not in enabled drivers build config 00:02:13.735 net/tap: not in 
enabled drivers build config 00:02:13.735 net/thunderx: not in enabled drivers build config 00:02:13.735 net/txgbe: not in enabled drivers build config 00:02:13.735 net/vdev_netvsc: not in enabled drivers build config 00:02:13.735 net/vhost: not in enabled drivers build config 00:02:13.735 net/virtio: not in enabled drivers build config 00:02:13.735 net/vmxnet3: not in enabled drivers build config 00:02:13.735 raw/*: missing internal dependency, "rawdev" 00:02:13.735 crypto/armv8: not in enabled drivers build config 00:02:13.735 crypto/bcmfs: not in enabled drivers build config 00:02:13.735 crypto/caam_jr: not in enabled drivers build config 00:02:13.735 crypto/ccp: not in enabled drivers build config 00:02:13.735 crypto/cnxk: not in enabled drivers build config 00:02:13.735 crypto/dpaa_sec: not in enabled drivers build config 00:02:13.735 crypto/dpaa2_sec: not in enabled drivers build config 00:02:13.735 crypto/ipsec_mb: not in enabled drivers build config 00:02:13.735 crypto/mlx5: not in enabled drivers build config 00:02:13.735 crypto/mvsam: not in enabled drivers build config 00:02:13.735 crypto/nitrox: not in enabled drivers build config 00:02:13.735 crypto/null: not in enabled drivers build config 00:02:13.735 crypto/octeontx: not in enabled drivers build config 00:02:13.735 crypto/openssl: not in enabled drivers build config 00:02:13.735 crypto/scheduler: not in enabled drivers build config 00:02:13.735 crypto/uadk: not in enabled drivers build config 00:02:13.735 crypto/virtio: not in enabled drivers build config 00:02:13.735 compress/isal: not in enabled drivers build config 00:02:13.735 compress/mlx5: not in enabled drivers build config 00:02:13.735 compress/nitrox: not in enabled drivers build config 00:02:13.735 compress/octeontx: not in enabled drivers build config 00:02:13.735 compress/zlib: not in enabled drivers build config 00:02:13.735 regex/*: missing internal dependency, "regexdev" 00:02:13.735 ml/*: missing internal dependency, "mldev" 
00:02:13.735 vdpa/ifc: not in enabled drivers build config 00:02:13.735 vdpa/mlx5: not in enabled drivers build config 00:02:13.735 vdpa/nfp: not in enabled drivers build config 00:02:13.735 vdpa/sfc: not in enabled drivers build config 00:02:13.735 event/*: missing internal dependency, "eventdev" 00:02:13.735 baseband/*: missing internal dependency, "bbdev" 00:02:13.735 gpu/*: missing internal dependency, "gpudev" 00:02:13.735 00:02:13.735 00:02:13.735 Build targets in project: 85 00:02:13.735 00:02:13.735 DPDK 24.03.0 00:02:13.735 00:02:13.735 User defined options 00:02:13.735 buildtype : debug 00:02:13.735 default_library : shared 00:02:13.735 libdir : lib 00:02:13.735 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:13.735 b_sanitize : address 00:02:13.735 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:13.735 c_link_args : 00:02:13.735 cpu_instruction_set: native 00:02:13.735 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:13.735 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:13.735 enable_docs : false 00:02:13.735 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:13.735 enable_kmods : false 00:02:13.735 max_lcores : 128 00:02:13.735 tests : false 00:02:13.735 00:02:13.735 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:13.735 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:13.735 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:02:13.735 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:13.735 [3/268] Linking static target lib/librte_kvargs.a 00:02:13.735 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:13.735 [5/268] Linking static target lib/librte_log.a 00:02:13.735 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:13.735 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:13.735 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:13.735 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:13.735 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:13.735 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.735 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:13.735 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:13.735 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:13.735 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:14.026 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:14.283 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.283 [18/268] Linking static target lib/librte_telemetry.a 00:02:14.283 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.283 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.541 [21/268] Linking target lib/librte_log.so.24.1 00:02:14.541 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:14.541 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:14.541 [24/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.541 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:14.541 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:14.798 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.798 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:14.798 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.798 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:14.798 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.798 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:15.055 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:15.055 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:15.312 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.312 [36/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.312 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.312 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:15.312 [39/268] Linking target lib/librte_telemetry.so.24.1 00:02:15.312 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.312 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.569 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.569 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.569 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:15.827 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.827 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.827 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.827 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.084 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:16.084 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.084 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.341 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.341 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.341 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.341 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.598 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.598 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.598 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.598 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.855 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.855 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.855 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:17.111 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:17.112 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:17.112 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:17.112 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:17.368 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:02:17.368 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:17.639 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:17.639 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:17.639 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:17.639 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:17.639 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:17.897 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:17.897 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.897 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.897 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:17.897 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:17.897 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:18.154 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.154 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:18.154 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.154 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:18.154 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.154 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.413 [86/268] Linking static target lib/librte_ring.a 00:02:18.413 [87/268] Linking static target lib/librte_eal.a 00:02:18.413 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.669 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.669 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:18.669 [91/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.669 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.669 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.669 [94/268] Linking static target lib/librte_mempool.a 00:02:18.925 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.182 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.182 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.439 [98/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.439 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:19.439 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:19.439 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:19.439 [102/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.439 [103/268] Linking static target lib/librte_rcu.a 00:02:19.439 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.695 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.695 [106/268] Linking static target lib/librte_net.a 00:02:19.695 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:19.695 [108/268] Linking static target lib/librte_mbuf.a 00:02:19.695 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.695 [110/268] Linking static target lib/librte_meter.a 00:02:19.952 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.952 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:20.210 [113/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.210 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:20.210 [115/268] Generating lib/mempool.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:20.210 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.210 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.468 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:20.725 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.983 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.983 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:21.241 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:21.241 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:21.499 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:21.499 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:21.499 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.499 [127/268] Linking static target lib/librte_pci.a 00:02:21.499 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.758 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:21.758 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:21.758 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.758 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.016 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:22.016 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:22.016 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:22.016 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:22.016 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:22.016 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:22.016 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.016 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.273 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:22.273 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.273 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:22.273 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:22.273 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:22.531 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.789 [147/268] Linking static target lib/librte_cmdline.a 00:02:22.789 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.789 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:22.789 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.789 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:23.047 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:23.305 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:23.305 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:23.305 [155/268] Linking static target lib/librte_timer.a 00:02:23.305 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:23.305 [157/268] Linking static target lib/librte_ethdev.a 00:02:23.562 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.562 [159/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:23.562 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:23.562 [161/268] Linking static target lib/librte_compressdev.a 00:02:23.562 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:23.562 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:23.562 [164/268] Linking static target lib/librte_hash.a 00:02:24.127 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.128 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.128 [167/268] Linking static target lib/librte_dmadev.a 00:02:24.128 [168/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.128 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.128 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.385 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.386 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.644 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.644 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.644 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.902 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.902 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.902 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.160 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.160 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 
00:02:25.160 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.160 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.419 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.419 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.419 [185/268] Linking static target lib/librte_cryptodev.a 00:02:25.419 [186/268] Linking static target lib/librte_power.a 00:02:25.677 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.677 [188/268] Linking static target lib/librte_reorder.a 00:02:25.677 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.677 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.935 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.935 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.935 [193/268] Linking static target lib/librte_security.a 00:02:26.200 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.488 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.748 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.748 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.748 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:26.748 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.006 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.265 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.265 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:27.265 [203/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.265 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:27.524 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:27.784 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:27.784 [207/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:27.784 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:27.784 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:27.784 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:28.043 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.043 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:28.043 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:28.043 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.043 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.043 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:28.043 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.043 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.043 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.043 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:28.043 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:28.303 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:28.303 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.303 [224/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.303 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:28.561 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.820 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.724 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:30.724 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.724 [230/268] Linking target lib/librte_eal.so.24.1 00:02:30.982 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:30.982 [232/268] Linking target lib/librte_ring.so.24.1 00:02:30.982 [233/268] Linking target lib/librte_meter.so.24.1 00:02:30.982 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:30.982 [235/268] Linking target lib/librte_timer.so.24.1 00:02:30.982 [236/268] Linking target lib/librte_pci.so.24.1 00:02:30.982 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:30.982 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:30.982 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:30.982 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:31.239 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:31.239 [242/268] Linking target lib/librte_mempool.so.24.1 00:02:31.239 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:31.239 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:31.239 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:31.239 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:31.239 [247/268] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:31.239 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:31.239 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:31.497 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:31.497 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:31.497 [252/268] Linking target lib/librte_net.so.24.1 00:02:31.497 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:31.497 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:31.756 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:31.756 [256/268] Linking target lib/librte_cmdline.so.24.1 00:02:31.756 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:31.756 [258/268] Linking target lib/librte_hash.so.24.1 00:02:31.756 [259/268] Linking target lib/librte_security.so.24.1 00:02:32.014 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:32.596 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.596 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:32.854 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:32.854 [264/268] Linking target lib/librte_power.so.24.1 00:02:35.388 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:35.388 [266/268] Linking static target lib/librte_vhost.a 00:02:37.941 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.941 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:37.941 INFO: autodetecting backend as ninja 00:02:37.941 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:59.916 CC lib/log/log.o 00:02:59.916 CC lib/log/log_flags.o 00:02:59.916 CC 
lib/log/log_deprecated.o 00:02:59.916 CC lib/ut/ut.o 00:02:59.916 CC lib/ut_mock/mock.o 00:02:59.916 LIB libspdk_log.a 00:02:59.916 LIB libspdk_ut.a 00:02:59.916 LIB libspdk_ut_mock.a 00:02:59.916 SO libspdk_log.so.7.1 00:02:59.916 SO libspdk_ut_mock.so.6.0 00:02:59.916 SO libspdk_ut.so.2.0 00:02:59.916 SYMLINK libspdk_log.so 00:02:59.916 SYMLINK libspdk_ut_mock.so 00:02:59.916 SYMLINK libspdk_ut.so 00:02:59.916 CXX lib/trace_parser/trace.o 00:02:59.916 CC lib/util/base64.o 00:02:59.916 CC lib/util/bit_array.o 00:02:59.916 CC lib/util/crc32c.o 00:02:59.916 CC lib/util/crc16.o 00:02:59.916 CC lib/util/crc32.o 00:02:59.916 CC lib/util/cpuset.o 00:02:59.916 CC lib/ioat/ioat.o 00:02:59.916 CC lib/dma/dma.o 00:02:59.916 CC lib/vfio_user/host/vfio_user_pci.o 00:02:59.916 CC lib/util/crc32_ieee.o 00:02:59.916 CC lib/util/crc64.o 00:02:59.916 CC lib/util/dif.o 00:02:59.916 CC lib/util/fd.o 00:02:59.916 CC lib/util/fd_group.o 00:02:59.916 CC lib/util/file.o 00:02:59.916 LIB libspdk_dma.a 00:02:59.916 SO libspdk_dma.so.5.0 00:02:59.916 CC lib/vfio_user/host/vfio_user.o 00:02:59.916 CC lib/util/hexlify.o 00:02:59.916 LIB libspdk_ioat.a 00:02:59.916 CC lib/util/iov.o 00:02:59.916 SYMLINK libspdk_dma.so 00:02:59.916 CC lib/util/math.o 00:02:59.916 SO libspdk_ioat.so.7.0 00:02:59.916 CC lib/util/net.o 00:02:59.916 CC lib/util/pipe.o 00:02:59.916 SYMLINK libspdk_ioat.so 00:02:59.916 CC lib/util/strerror_tls.o 00:02:59.916 CC lib/util/string.o 00:02:59.916 LIB libspdk_vfio_user.a 00:02:59.916 CC lib/util/uuid.o 00:02:59.916 CC lib/util/xor.o 00:02:59.916 CC lib/util/zipf.o 00:02:59.916 SO libspdk_vfio_user.so.5.0 00:02:59.916 CC lib/util/md5.o 00:02:59.916 SYMLINK libspdk_vfio_user.so 00:02:59.916 LIB libspdk_util.a 00:02:59.916 LIB libspdk_trace_parser.a 00:02:59.916 SO libspdk_util.so.10.1 00:02:59.916 SO libspdk_trace_parser.so.6.0 00:02:59.916 SYMLINK libspdk_util.so 00:02:59.916 SYMLINK libspdk_trace_parser.so 00:02:59.916 CC lib/json/json_parse.o 00:02:59.916 CC 
lib/json/json_util.o 00:02:59.916 CC lib/vmd/vmd.o 00:02:59.916 CC lib/json/json_write.o 00:02:59.916 CC lib/vmd/led.o 00:02:59.916 CC lib/rdma_utils/rdma_utils.o 00:02:59.916 CC lib/env_dpdk/env.o 00:02:59.916 CC lib/env_dpdk/memory.o 00:02:59.916 CC lib/conf/conf.o 00:02:59.916 CC lib/idxd/idxd.o 00:02:59.916 CC lib/idxd/idxd_user.o 00:02:59.916 CC lib/idxd/idxd_kernel.o 00:02:59.916 CC lib/env_dpdk/pci.o 00:02:59.916 LIB libspdk_conf.a 00:02:59.916 LIB libspdk_rdma_utils.a 00:02:59.916 SO libspdk_conf.so.6.0 00:02:59.916 LIB libspdk_json.a 00:02:59.916 SO libspdk_rdma_utils.so.1.0 00:02:59.916 SO libspdk_json.so.6.0 00:02:59.916 SYMLINK libspdk_conf.so 00:02:59.916 SYMLINK libspdk_rdma_utils.so 00:02:59.916 CC lib/env_dpdk/init.o 00:02:59.916 CC lib/env_dpdk/threads.o 00:02:59.916 SYMLINK libspdk_json.so 00:02:59.916 CC lib/env_dpdk/pci_ioat.o 00:02:59.916 CC lib/rdma_provider/common.o 00:02:59.916 CC lib/env_dpdk/pci_virtio.o 00:02:59.916 CC lib/jsonrpc/jsonrpc_server.o 00:02:59.916 CC lib/env_dpdk/pci_vmd.o 00:03:00.175 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:00.175 CC lib/env_dpdk/pci_idxd.o 00:03:00.175 CC lib/env_dpdk/pci_event.o 00:03:00.175 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:00.175 LIB libspdk_idxd.a 00:03:00.175 LIB libspdk_vmd.a 00:03:00.175 CC lib/env_dpdk/sigbus_handler.o 00:03:00.175 SO libspdk_idxd.so.12.1 00:03:00.175 SO libspdk_vmd.so.6.0 00:03:00.175 CC lib/jsonrpc/jsonrpc_client.o 00:03:00.175 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.175 CC lib/env_dpdk/pci_dpdk.o 00:03:00.456 SYMLINK libspdk_vmd.so 00:03:00.456 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.456 SYMLINK libspdk_idxd.so 00:03:00.456 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.456 LIB libspdk_rdma_provider.a 00:03:00.456 SO libspdk_rdma_provider.so.7.0 00:03:00.456 SYMLINK libspdk_rdma_provider.so 00:03:00.456 LIB libspdk_jsonrpc.a 00:03:00.744 SO libspdk_jsonrpc.so.6.0 00:03:00.744 SYMLINK libspdk_jsonrpc.so 00:03:01.004 CC lib/rpc/rpc.o 00:03:01.263 LIB 
libspdk_env_dpdk.a 00:03:01.263 LIB libspdk_rpc.a 00:03:01.263 SO libspdk_env_dpdk.so.15.1 00:03:01.263 SO libspdk_rpc.so.6.0 00:03:01.522 SYMLINK libspdk_rpc.so 00:03:01.522 SYMLINK libspdk_env_dpdk.so 00:03:01.781 CC lib/notify/notify.o 00:03:01.781 CC lib/notify/notify_rpc.o 00:03:01.781 CC lib/keyring/keyring.o 00:03:01.781 CC lib/keyring/keyring_rpc.o 00:03:01.781 CC lib/trace/trace.o 00:03:01.781 CC lib/trace/trace_rpc.o 00:03:01.781 CC lib/trace/trace_flags.o 00:03:02.040 LIB libspdk_notify.a 00:03:02.040 SO libspdk_notify.so.6.0 00:03:02.040 LIB libspdk_keyring.a 00:03:02.040 SYMLINK libspdk_notify.so 00:03:02.040 LIB libspdk_trace.a 00:03:02.040 SO libspdk_keyring.so.2.0 00:03:02.040 SO libspdk_trace.so.11.0 00:03:02.040 SYMLINK libspdk_keyring.so 00:03:02.298 SYMLINK libspdk_trace.so 00:03:02.557 CC lib/thread/thread.o 00:03:02.557 CC lib/thread/iobuf.o 00:03:02.557 CC lib/sock/sock.o 00:03:02.557 CC lib/sock/sock_rpc.o 00:03:03.123 LIB libspdk_sock.a 00:03:03.123 SO libspdk_sock.so.10.0 00:03:03.123 SYMLINK libspdk_sock.so 00:03:03.381 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.381 CC lib/nvme/nvme_ctrlr.o 00:03:03.381 CC lib/nvme/nvme_fabric.o 00:03:03.381 CC lib/nvme/nvme_ns_cmd.o 00:03:03.381 CC lib/nvme/nvme_ns.o 00:03:03.381 CC lib/nvme/nvme_pcie_common.o 00:03:03.381 CC lib/nvme/nvme_qpair.o 00:03:03.381 CC lib/nvme/nvme_pcie.o 00:03:03.381 CC lib/nvme/nvme.o 00:03:04.321 LIB libspdk_thread.a 00:03:04.321 CC lib/nvme/nvme_quirks.o 00:03:04.321 CC lib/nvme/nvme_transport.o 00:03:04.321 SO libspdk_thread.so.11.0 00:03:04.321 CC lib/nvme/nvme_discovery.o 00:03:04.321 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:04.321 SYMLINK libspdk_thread.so 00:03:04.321 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:04.321 CC lib/nvme/nvme_tcp.o 00:03:04.580 CC lib/nvme/nvme_opal.o 00:03:04.580 CC lib/accel/accel.o 00:03:04.580 CC lib/nvme/nvme_io_msg.o 00:03:04.840 CC lib/nvme/nvme_poll_group.o 00:03:04.840 CC lib/nvme/nvme_zns.o 00:03:05.098 CC lib/nvme/nvme_stubs.o 00:03:05.098 
CC lib/accel/accel_rpc.o 00:03:05.098 CC lib/blob/blobstore.o 00:03:05.098 CC lib/blob/request.o 00:03:05.098 CC lib/blob/zeroes.o 00:03:05.357 CC lib/blob/blob_bs_dev.o 00:03:05.357 CC lib/accel/accel_sw.o 00:03:05.357 CC lib/nvme/nvme_auth.o 00:03:05.357 CC lib/nvme/nvme_cuse.o 00:03:05.357 CC lib/nvme/nvme_rdma.o 00:03:05.616 CC lib/init/json_config.o 00:03:05.616 CC lib/virtio/virtio.o 00:03:05.616 CC lib/virtio/virtio_vhost_user.o 00:03:05.616 LIB libspdk_accel.a 00:03:05.876 SO libspdk_accel.so.16.0 00:03:05.876 SYMLINK libspdk_accel.so 00:03:05.876 CC lib/init/subsystem.o 00:03:05.876 CC lib/init/subsystem_rpc.o 00:03:05.876 CC lib/virtio/virtio_vfio_user.o 00:03:06.136 CC lib/virtio/virtio_pci.o 00:03:06.136 CC lib/init/rpc.o 00:03:06.136 CC lib/fsdev/fsdev.o 00:03:06.136 CC lib/fsdev/fsdev_io.o 00:03:06.136 CC lib/bdev/bdev.o 00:03:06.136 LIB libspdk_init.a 00:03:06.136 CC lib/bdev/bdev_rpc.o 00:03:06.136 SO libspdk_init.so.6.0 00:03:06.396 SYMLINK libspdk_init.so 00:03:06.396 CC lib/bdev/bdev_zone.o 00:03:06.396 LIB libspdk_virtio.a 00:03:06.396 SO libspdk_virtio.so.7.0 00:03:06.396 CC lib/fsdev/fsdev_rpc.o 00:03:06.396 SYMLINK libspdk_virtio.so 00:03:06.396 CC lib/bdev/part.o 00:03:06.656 CC lib/event/app.o 00:03:06.656 CC lib/bdev/scsi_nvme.o 00:03:06.656 CC lib/event/reactor.o 00:03:06.656 CC lib/event/log_rpc.o 00:03:06.656 CC lib/event/app_rpc.o 00:03:06.656 CC lib/event/scheduler_static.o 00:03:06.915 LIB libspdk_fsdev.a 00:03:06.915 SO libspdk_fsdev.so.2.0 00:03:06.915 SYMLINK libspdk_fsdev.so 00:03:07.175 LIB libspdk_event.a 00:03:07.175 LIB libspdk_nvme.a 00:03:07.175 SO libspdk_event.so.14.0 00:03:07.175 SYMLINK libspdk_event.so 00:03:07.175 SO libspdk_nvme.so.15.0 00:03:07.434 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:07.693 SYMLINK libspdk_nvme.so 00:03:08.262 LIB libspdk_fuse_dispatcher.a 00:03:08.262 SO libspdk_fuse_dispatcher.so.1.0 00:03:08.262 SYMLINK libspdk_fuse_dispatcher.so 00:03:08.831 LIB libspdk_blob.a 00:03:08.832 SO 
libspdk_blob.so.12.0 00:03:09.090 SYMLINK libspdk_blob.so 00:03:09.350 LIB libspdk_bdev.a 00:03:09.350 CC lib/blobfs/blobfs.o 00:03:09.350 CC lib/blobfs/tree.o 00:03:09.350 CC lib/lvol/lvol.o 00:03:09.350 SO libspdk_bdev.so.17.0 00:03:09.608 SYMLINK libspdk_bdev.so 00:03:09.867 CC lib/ftl/ftl_core.o 00:03:09.867 CC lib/ftl/ftl_init.o 00:03:09.867 CC lib/ftl/ftl_layout.o 00:03:09.867 CC lib/ftl/ftl_debug.o 00:03:09.867 CC lib/ublk/ublk.o 00:03:09.867 CC lib/nvmf/ctrlr.o 00:03:09.867 CC lib/scsi/dev.o 00:03:09.867 CC lib/nbd/nbd.o 00:03:09.867 CC lib/scsi/lun.o 00:03:10.125 CC lib/ublk/ublk_rpc.o 00:03:10.125 CC lib/ftl/ftl_io.o 00:03:10.125 CC lib/ftl/ftl_sb.o 00:03:10.125 CC lib/nvmf/ctrlr_discovery.o 00:03:10.125 CC lib/ftl/ftl_l2p.o 00:03:10.125 CC lib/nbd/nbd_rpc.o 00:03:10.384 CC lib/scsi/port.o 00:03:10.385 CC lib/ftl/ftl_l2p_flat.o 00:03:10.385 LIB libspdk_blobfs.a 00:03:10.385 SO libspdk_blobfs.so.11.0 00:03:10.385 CC lib/nvmf/ctrlr_bdev.o 00:03:10.385 CC lib/nvmf/subsystem.o 00:03:10.385 LIB libspdk_nbd.a 00:03:10.385 SYMLINK libspdk_blobfs.so 00:03:10.385 CC lib/nvmf/nvmf.o 00:03:10.385 SO libspdk_nbd.so.7.0 00:03:10.385 CC lib/scsi/scsi.o 00:03:10.643 LIB libspdk_ublk.a 00:03:10.643 LIB libspdk_lvol.a 00:03:10.643 SO libspdk_ublk.so.3.0 00:03:10.643 SYMLINK libspdk_nbd.so 00:03:10.643 CC lib/ftl/ftl_nv_cache.o 00:03:10.643 CC lib/nvmf/nvmf_rpc.o 00:03:10.643 SO libspdk_lvol.so.11.0 00:03:10.643 SYMLINK libspdk_ublk.so 00:03:10.643 CC lib/ftl/ftl_band.o 00:03:10.643 SYMLINK libspdk_lvol.so 00:03:10.643 CC lib/ftl/ftl_band_ops.o 00:03:10.643 CC lib/scsi/scsi_bdev.o 00:03:10.901 CC lib/nvmf/transport.o 00:03:11.160 CC lib/nvmf/tcp.o 00:03:11.160 CC lib/nvmf/stubs.o 00:03:11.419 CC lib/scsi/scsi_pr.o 00:03:11.419 CC lib/nvmf/mdns_server.o 00:03:11.679 CC lib/nvmf/rdma.o 00:03:11.679 CC lib/nvmf/auth.o 00:03:11.679 CC lib/scsi/scsi_rpc.o 00:03:11.679 CC lib/ftl/ftl_writer.o 00:03:11.679 CC lib/scsi/task.o 00:03:11.938 CC lib/ftl/ftl_rq.o 00:03:11.938 CC 
lib/ftl/ftl_reloc.o 00:03:11.938 CC lib/ftl/ftl_l2p_cache.o 00:03:11.938 LIB libspdk_scsi.a 00:03:11.938 CC lib/ftl/ftl_p2l.o 00:03:11.938 CC lib/ftl/ftl_p2l_log.o 00:03:11.938 CC lib/ftl/mngt/ftl_mngt.o 00:03:11.938 SO libspdk_scsi.so.9.0 00:03:12.196 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:12.196 SYMLINK libspdk_scsi.so 00:03:12.196 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:12.453 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.453 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.453 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.453 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.711 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.711 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.711 CC lib/iscsi/conn.o 00:03:12.711 CC lib/vhost/vhost.o 00:03:12.711 CC lib/vhost/vhost_rpc.o 00:03:12.711 CC lib/vhost/vhost_scsi.o 00:03:12.711 CC lib/vhost/vhost_blk.o 00:03:12.711 CC lib/vhost/rte_vhost_user.o 00:03:12.711 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.969 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.969 CC lib/iscsi/init_grp.o 00:03:13.225 CC lib/iscsi/iscsi.o 00:03:13.225 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:13.483 CC lib/iscsi/param.o 00:03:13.483 CC lib/iscsi/portal_grp.o 00:03:13.483 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.741 CC lib/ftl/utils/ftl_conf.o 00:03:13.741 CC lib/iscsi/tgt_node.o 00:03:13.741 CC lib/iscsi/iscsi_subsystem.o 00:03:13.741 CC lib/iscsi/iscsi_rpc.o 00:03:13.741 CC lib/ftl/utils/ftl_md.o 00:03:13.741 CC lib/iscsi/task.o 00:03:14.000 CC lib/ftl/utils/ftl_mempool.o 00:03:14.000 CC lib/ftl/utils/ftl_bitmap.o 00:03:14.000 LIB libspdk_vhost.a 00:03:14.000 CC lib/ftl/utils/ftl_property.o 00:03:14.000 SO libspdk_vhost.so.8.0 00:03:14.259 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:14.259 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:14.259 SYMLINK libspdk_vhost.so 00:03:14.259 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:14.259 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:14.259 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:14.259 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:14.259 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:14.259 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:14.519 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:14.519 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:14.519 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:14.519 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:14.519 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:14.519 CC lib/ftl/base/ftl_base_dev.o 00:03:14.519 CC lib/ftl/base/ftl_base_bdev.o 00:03:14.519 CC lib/ftl/ftl_trace.o 00:03:14.779 LIB libspdk_nvmf.a 00:03:14.779 LIB libspdk_ftl.a 00:03:15.037 SO libspdk_nvmf.so.20.0 00:03:15.037 LIB libspdk_iscsi.a 00:03:15.037 SO libspdk_iscsi.so.8.0 00:03:15.037 SO libspdk_ftl.so.9.0 00:03:15.295 SYMLINK libspdk_nvmf.so 00:03:15.295 SYMLINK libspdk_iscsi.so 00:03:15.295 SYMLINK libspdk_ftl.so 00:03:15.862 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.862 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.862 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.862 CC module/fsdev/aio/fsdev_aio.o 00:03:15.862 CC module/keyring/file/keyring.o 00:03:15.862 CC module/accel/error/accel_error.o 00:03:15.862 CC module/accel/ioat/accel_ioat.o 00:03:15.862 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.862 CC module/sock/posix/posix.o 00:03:15.862 CC module/blob/bdev/blob_bdev.o 00:03:15.862 LIB libspdk_env_dpdk_rpc.a 00:03:15.862 SO libspdk_env_dpdk_rpc.so.6.0 00:03:16.120 SYMLINK libspdk_env_dpdk_rpc.so 00:03:16.120 CC module/accel/ioat/accel_ioat_rpc.o 00:03:16.120 CC module/keyring/file/keyring_rpc.o 00:03:16.120 LIB libspdk_scheduler_dpdk_governor.a 00:03:16.120 LIB libspdk_scheduler_gscheduler.a 00:03:16.120 SO libspdk_scheduler_gscheduler.so.4.0 00:03:16.120 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:16.120 LIB libspdk_scheduler_dynamic.a 00:03:16.120 CC module/accel/error/accel_error_rpc.o 00:03:16.120 SO libspdk_scheduler_dynamic.so.4.0 00:03:16.120 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:16.120 SYMLINK libspdk_scheduler_gscheduler.so 00:03:16.120 CC module/fsdev/aio/fsdev_aio_rpc.o 
00:03:16.120 CC module/fsdev/aio/linux_aio_mgr.o 00:03:16.120 LIB libspdk_accel_ioat.a 00:03:16.120 SYMLINK libspdk_scheduler_dynamic.so 00:03:16.120 LIB libspdk_keyring_file.a 00:03:16.120 SO libspdk_accel_ioat.so.6.0 00:03:16.120 SO libspdk_keyring_file.so.2.0 00:03:16.120 LIB libspdk_blob_bdev.a 00:03:16.378 CC module/keyring/linux/keyring.o 00:03:16.378 SO libspdk_blob_bdev.so.12.0 00:03:16.378 LIB libspdk_accel_error.a 00:03:16.378 SYMLINK libspdk_accel_ioat.so 00:03:16.378 CC module/keyring/linux/keyring_rpc.o 00:03:16.378 SYMLINK libspdk_keyring_file.so 00:03:16.378 SO libspdk_accel_error.so.2.0 00:03:16.378 SYMLINK libspdk_blob_bdev.so 00:03:16.378 CC module/accel/dsa/accel_dsa.o 00:03:16.378 CC module/accel/dsa/accel_dsa_rpc.o 00:03:16.378 SYMLINK libspdk_accel_error.so 00:03:16.378 LIB libspdk_keyring_linux.a 00:03:16.378 CC module/accel/iaa/accel_iaa.o 00:03:16.378 SO libspdk_keyring_linux.so.1.0 00:03:16.636 SYMLINK libspdk_keyring_linux.so 00:03:16.636 CC module/bdev/delay/vbdev_delay.o 00:03:16.636 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.636 CC module/bdev/gpt/gpt.o 00:03:16.636 CC module/bdev/error/vbdev_error.o 00:03:16.636 LIB libspdk_fsdev_aio.a 00:03:16.636 LIB libspdk_accel_dsa.a 00:03:16.636 CC module/accel/iaa/accel_iaa_rpc.o 00:03:16.636 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.636 SO libspdk_fsdev_aio.so.1.0 00:03:16.894 CC module/bdev/malloc/bdev_malloc.o 00:03:16.894 SO libspdk_accel_dsa.so.5.0 00:03:16.894 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.894 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.894 SYMLINK libspdk_fsdev_aio.so 00:03:16.894 SYMLINK libspdk_accel_dsa.so 00:03:16.894 LIB libspdk_sock_posix.a 00:03:16.894 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.894 LIB libspdk_accel_iaa.a 00:03:16.894 SO libspdk_accel_iaa.so.3.0 00:03:16.894 SO libspdk_sock_posix.so.6.0 00:03:16.894 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.894 SYMLINK libspdk_accel_iaa.so 00:03:16.894 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:17.153 
SYMLINK libspdk_sock_posix.so 00:03:17.153 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:17.153 LIB libspdk_blobfs_bdev.a 00:03:17.153 CC module/bdev/null/bdev_null.o 00:03:17.153 SO libspdk_blobfs_bdev.so.6.0 00:03:17.153 LIB libspdk_bdev_error.a 00:03:17.153 SYMLINK libspdk_blobfs_bdev.so 00:03:17.153 LIB libspdk_bdev_gpt.a 00:03:17.153 CC module/bdev/null/bdev_null_rpc.o 00:03:17.153 SO libspdk_bdev_error.so.6.0 00:03:17.153 SO libspdk_bdev_gpt.so.6.0 00:03:17.153 CC module/bdev/nvme/bdev_nvme.o 00:03:17.153 LIB libspdk_bdev_delay.a 00:03:17.153 SYMLINK libspdk_bdev_error.so 00:03:17.153 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:17.153 LIB libspdk_bdev_malloc.a 00:03:17.153 SO libspdk_bdev_delay.so.6.0 00:03:17.153 SYMLINK libspdk_bdev_gpt.so 00:03:17.413 SO libspdk_bdev_malloc.so.6.0 00:03:17.413 CC module/bdev/nvme/nvme_rpc.o 00:03:17.413 SYMLINK libspdk_bdev_delay.so 00:03:17.413 SYMLINK libspdk_bdev_malloc.so 00:03:17.413 LIB libspdk_bdev_lvol.a 00:03:17.413 LIB libspdk_bdev_null.a 00:03:17.413 CC module/bdev/passthru/vbdev_passthru.o 00:03:17.413 SO libspdk_bdev_lvol.so.6.0 00:03:17.413 SO libspdk_bdev_null.so.6.0 00:03:17.413 CC module/bdev/raid/bdev_raid.o 00:03:17.413 CC module/bdev/split/vbdev_split.o 00:03:17.672 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:17.672 SYMLINK libspdk_bdev_lvol.so 00:03:17.672 CC module/bdev/aio/bdev_aio.o 00:03:17.672 SYMLINK libspdk_bdev_null.so 00:03:17.672 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.672 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.930 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.930 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:17.930 CC module/bdev/ftl/bdev_ftl.o 00:03:17.930 CC module/bdev/iscsi/bdev_iscsi.o 00:03:17.930 LIB libspdk_bdev_aio.a 00:03:17.930 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.930 SO libspdk_bdev_aio.so.6.0 00:03:17.930 LIB libspdk_bdev_split.a 00:03:17.930 LIB libspdk_bdev_passthru.a 00:03:17.930 SO libspdk_bdev_split.so.6.0 00:03:17.930 SO 
libspdk_bdev_passthru.so.6.0 00:03:18.189 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:18.189 SYMLINK libspdk_bdev_aio.so 00:03:18.189 CC module/bdev/raid/bdev_raid_sb.o 00:03:18.189 SYMLINK libspdk_bdev_split.so 00:03:18.189 CC module/bdev/nvme/bdev_mdns_client.o 00:03:18.189 SYMLINK libspdk_bdev_passthru.so 00:03:18.189 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:18.189 CC module/bdev/raid/raid0.o 00:03:18.189 LIB libspdk_bdev_zone_block.a 00:03:18.189 SO libspdk_bdev_zone_block.so.6.0 00:03:18.189 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:18.189 SYMLINK libspdk_bdev_zone_block.so 00:03:18.189 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:18.189 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:18.189 CC module/bdev/nvme/vbdev_opal.o 00:03:18.189 LIB libspdk_bdev_iscsi.a 00:03:18.189 LIB libspdk_bdev_ftl.a 00:03:18.448 SO libspdk_bdev_iscsi.so.6.0 00:03:18.448 SO libspdk_bdev_ftl.so.6.0 00:03:18.448 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:18.448 SYMLINK libspdk_bdev_iscsi.so 00:03:18.448 CC module/bdev/raid/raid1.o 00:03:18.448 SYMLINK libspdk_bdev_ftl.so 00:03:18.448 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:18.448 CC module/bdev/raid/concat.o 00:03:18.448 CC module/bdev/raid/raid5f.o 00:03:18.707 LIB libspdk_bdev_virtio.a 00:03:18.972 SO libspdk_bdev_virtio.so.6.0 00:03:18.972 SYMLINK libspdk_bdev_virtio.so 00:03:18.972 LIB libspdk_bdev_raid.a 00:03:19.231 SO libspdk_bdev_raid.so.6.0 00:03:19.231 SYMLINK libspdk_bdev_raid.so 00:03:20.615 LIB libspdk_bdev_nvme.a 00:03:20.615 SO libspdk_bdev_nvme.so.7.1 00:03:20.874 SYMLINK libspdk_bdev_nvme.so 00:03:21.444 CC module/event/subsystems/fsdev/fsdev.o 00:03:21.444 CC module/event/subsystems/sock/sock.o 00:03:21.444 CC module/event/subsystems/iobuf/iobuf.o 00:03:21.444 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:21.444 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:21.444 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:21.444 CC module/event/subsystems/vmd/vmd.o 00:03:21.444 CC 
module/event/subsystems/keyring/keyring.o 00:03:21.444 CC module/event/subsystems/scheduler/scheduler.o 00:03:21.444 LIB libspdk_event_fsdev.a 00:03:21.444 LIB libspdk_event_keyring.a 00:03:21.444 LIB libspdk_event_vhost_blk.a 00:03:21.444 LIB libspdk_event_vmd.a 00:03:21.444 LIB libspdk_event_scheduler.a 00:03:21.444 LIB libspdk_event_sock.a 00:03:21.444 SO libspdk_event_fsdev.so.1.0 00:03:21.444 SO libspdk_event_keyring.so.1.0 00:03:21.444 LIB libspdk_event_iobuf.a 00:03:21.444 SO libspdk_event_vhost_blk.so.3.0 00:03:21.444 SO libspdk_event_vmd.so.6.0 00:03:21.444 SO libspdk_event_sock.so.5.0 00:03:21.444 SO libspdk_event_scheduler.so.4.0 00:03:21.444 SO libspdk_event_iobuf.so.3.0 00:03:21.704 SYMLINK libspdk_event_fsdev.so 00:03:21.704 SYMLINK libspdk_event_keyring.so 00:03:21.704 SYMLINK libspdk_event_vhost_blk.so 00:03:21.704 SYMLINK libspdk_event_sock.so 00:03:21.704 SYMLINK libspdk_event_scheduler.so 00:03:21.704 SYMLINK libspdk_event_vmd.so 00:03:21.704 SYMLINK libspdk_event_iobuf.so 00:03:21.963 CC module/event/subsystems/accel/accel.o 00:03:22.222 LIB libspdk_event_accel.a 00:03:22.222 SO libspdk_event_accel.so.6.0 00:03:22.222 SYMLINK libspdk_event_accel.so 00:03:22.792 CC module/event/subsystems/bdev/bdev.o 00:03:22.793 LIB libspdk_event_bdev.a 00:03:22.793 SO libspdk_event_bdev.so.6.0 00:03:23.064 SYMLINK libspdk_event_bdev.so 00:03:23.330 CC module/event/subsystems/scsi/scsi.o 00:03:23.330 CC module/event/subsystems/ublk/ublk.o 00:03:23.330 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:23.330 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:23.330 CC module/event/subsystems/nbd/nbd.o 00:03:23.589 LIB libspdk_event_ublk.a 00:03:23.589 LIB libspdk_event_scsi.a 00:03:23.589 LIB libspdk_event_nbd.a 00:03:23.589 SO libspdk_event_ublk.so.3.0 00:03:23.589 SO libspdk_event_scsi.so.6.0 00:03:23.589 SO libspdk_event_nbd.so.6.0 00:03:23.589 LIB libspdk_event_nvmf.a 00:03:23.589 SYMLINK libspdk_event_ublk.so 00:03:23.589 SYMLINK libspdk_event_nbd.so 
00:03:23.589 SYMLINK libspdk_event_scsi.so 00:03:23.589 SO libspdk_event_nvmf.so.6.0 00:03:23.589 SYMLINK libspdk_event_nvmf.so 00:03:24.157 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:24.157 CC module/event/subsystems/iscsi/iscsi.o 00:03:24.157 LIB libspdk_event_vhost_scsi.a 00:03:24.157 LIB libspdk_event_iscsi.a 00:03:24.157 SO libspdk_event_vhost_scsi.so.3.0 00:03:24.157 SO libspdk_event_iscsi.so.6.0 00:03:24.416 SYMLINK libspdk_event_vhost_scsi.so 00:03:24.416 SYMLINK libspdk_event_iscsi.so 00:03:24.677 SO libspdk.so.6.0 00:03:24.677 SYMLINK libspdk.so 00:03:24.936 CXX app/trace/trace.o 00:03:24.936 CC test/rpc_client/rpc_client_test.o 00:03:24.936 TEST_HEADER include/spdk/accel.h 00:03:24.936 TEST_HEADER include/spdk/accel_module.h 00:03:24.936 TEST_HEADER include/spdk/assert.h 00:03:24.936 TEST_HEADER include/spdk/barrier.h 00:03:24.936 TEST_HEADER include/spdk/base64.h 00:03:24.936 TEST_HEADER include/spdk/bdev.h 00:03:24.936 TEST_HEADER include/spdk/bdev_module.h 00:03:24.936 TEST_HEADER include/spdk/bdev_zone.h 00:03:24.936 TEST_HEADER include/spdk/bit_array.h 00:03:24.936 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:24.936 TEST_HEADER include/spdk/bit_pool.h 00:03:24.937 TEST_HEADER include/spdk/blob_bdev.h 00:03:24.937 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:24.937 TEST_HEADER include/spdk/blobfs.h 00:03:24.937 TEST_HEADER include/spdk/blob.h 00:03:24.937 TEST_HEADER include/spdk/conf.h 00:03:24.937 TEST_HEADER include/spdk/config.h 00:03:24.937 TEST_HEADER include/spdk/cpuset.h 00:03:24.937 TEST_HEADER include/spdk/crc16.h 00:03:24.937 TEST_HEADER include/spdk/crc32.h 00:03:24.937 TEST_HEADER include/spdk/crc64.h 00:03:24.937 TEST_HEADER include/spdk/dif.h 00:03:24.937 TEST_HEADER include/spdk/dma.h 00:03:24.937 TEST_HEADER include/spdk/endian.h 00:03:24.937 TEST_HEADER include/spdk/env_dpdk.h 00:03:24.937 TEST_HEADER include/spdk/env.h 00:03:24.937 TEST_HEADER include/spdk/event.h 00:03:24.937 TEST_HEADER include/spdk/fd_group.h 
00:03:24.937 CC examples/util/zipf/zipf.o 00:03:24.937 TEST_HEADER include/spdk/fd.h 00:03:24.937 TEST_HEADER include/spdk/file.h 00:03:24.937 TEST_HEADER include/spdk/fsdev.h 00:03:24.937 CC test/thread/poller_perf/poller_perf.o 00:03:24.937 TEST_HEADER include/spdk/fsdev_module.h 00:03:24.937 TEST_HEADER include/spdk/ftl.h 00:03:24.937 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:24.937 CC examples/ioat/perf/perf.o 00:03:24.937 TEST_HEADER include/spdk/gpt_spec.h 00:03:24.937 TEST_HEADER include/spdk/hexlify.h 00:03:24.937 TEST_HEADER include/spdk/histogram_data.h 00:03:24.937 TEST_HEADER include/spdk/idxd.h 00:03:24.937 TEST_HEADER include/spdk/idxd_spec.h 00:03:24.937 TEST_HEADER include/spdk/init.h 00:03:24.937 TEST_HEADER include/spdk/ioat.h 00:03:24.937 TEST_HEADER include/spdk/ioat_spec.h 00:03:24.937 TEST_HEADER include/spdk/iscsi_spec.h 00:03:24.937 TEST_HEADER include/spdk/json.h 00:03:24.937 TEST_HEADER include/spdk/jsonrpc.h 00:03:24.937 TEST_HEADER include/spdk/keyring.h 00:03:24.937 TEST_HEADER include/spdk/keyring_module.h 00:03:24.937 TEST_HEADER include/spdk/likely.h 00:03:24.937 TEST_HEADER include/spdk/log.h 00:03:24.937 CC test/dma/test_dma/test_dma.o 00:03:24.937 TEST_HEADER include/spdk/lvol.h 00:03:24.937 TEST_HEADER include/spdk/md5.h 00:03:24.937 TEST_HEADER include/spdk/memory.h 00:03:24.937 TEST_HEADER include/spdk/mmio.h 00:03:24.937 TEST_HEADER include/spdk/nbd.h 00:03:24.937 TEST_HEADER include/spdk/net.h 00:03:24.937 TEST_HEADER include/spdk/notify.h 00:03:24.937 TEST_HEADER include/spdk/nvme.h 00:03:24.937 TEST_HEADER include/spdk/nvme_intel.h 00:03:24.937 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:24.937 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:24.937 TEST_HEADER include/spdk/nvme_spec.h 00:03:24.937 CC test/app/bdev_svc/bdev_svc.o 00:03:24.937 TEST_HEADER include/spdk/nvme_zns.h 00:03:24.937 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:25.197 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:25.197 TEST_HEADER 
include/spdk/nvmf.h 00:03:25.197 TEST_HEADER include/spdk/nvmf_spec.h 00:03:25.197 TEST_HEADER include/spdk/nvmf_transport.h 00:03:25.197 TEST_HEADER include/spdk/opal.h 00:03:25.197 TEST_HEADER include/spdk/opal_spec.h 00:03:25.197 TEST_HEADER include/spdk/pci_ids.h 00:03:25.197 TEST_HEADER include/spdk/pipe.h 00:03:25.197 CC test/env/mem_callbacks/mem_callbacks.o 00:03:25.197 TEST_HEADER include/spdk/queue.h 00:03:25.197 TEST_HEADER include/spdk/reduce.h 00:03:25.197 TEST_HEADER include/spdk/rpc.h 00:03:25.197 TEST_HEADER include/spdk/scheduler.h 00:03:25.197 TEST_HEADER include/spdk/scsi.h 00:03:25.197 TEST_HEADER include/spdk/scsi_spec.h 00:03:25.197 TEST_HEADER include/spdk/sock.h 00:03:25.197 TEST_HEADER include/spdk/stdinc.h 00:03:25.197 TEST_HEADER include/spdk/string.h 00:03:25.197 TEST_HEADER include/spdk/thread.h 00:03:25.197 TEST_HEADER include/spdk/trace.h 00:03:25.197 TEST_HEADER include/spdk/trace_parser.h 00:03:25.197 TEST_HEADER include/spdk/tree.h 00:03:25.197 TEST_HEADER include/spdk/ublk.h 00:03:25.197 TEST_HEADER include/spdk/util.h 00:03:25.197 TEST_HEADER include/spdk/uuid.h 00:03:25.197 TEST_HEADER include/spdk/version.h 00:03:25.197 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:25.197 LINK zipf 00:03:25.197 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:25.197 TEST_HEADER include/spdk/vhost.h 00:03:25.197 TEST_HEADER include/spdk/vmd.h 00:03:25.197 TEST_HEADER include/spdk/xor.h 00:03:25.197 LINK poller_perf 00:03:25.197 TEST_HEADER include/spdk/zipf.h 00:03:25.197 CXX test/cpp_headers/accel.o 00:03:25.197 LINK rpc_client_test 00:03:25.197 LINK interrupt_tgt 00:03:25.197 LINK ioat_perf 00:03:25.465 LINK bdev_svc 00:03:25.465 LINK spdk_trace 00:03:25.465 CXX test/cpp_headers/accel_module.o 00:03:25.465 CC app/trace_record/trace_record.o 00:03:25.465 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:25.465 CC app/nvmf_tgt/nvmf_main.o 00:03:25.465 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:25.465 CC examples/ioat/verify/verify.o 00:03:25.739 
LINK test_dma 00:03:25.739 CXX test/cpp_headers/assert.o 00:03:25.739 CC app/iscsi_tgt/iscsi_tgt.o 00:03:25.739 LINK mem_callbacks 00:03:25.739 LINK spdk_trace_record 00:03:25.739 LINK nvmf_tgt 00:03:25.739 LINK verify 00:03:25.739 CXX test/cpp_headers/barrier.o 00:03:25.739 CC examples/thread/thread/thread_ex.o 00:03:25.999 LINK iscsi_tgt 00:03:25.999 CC test/env/vtophys/vtophys.o 00:03:25.999 CXX test/cpp_headers/base64.o 00:03:25.999 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:25.999 CXX test/cpp_headers/bdev.o 00:03:25.999 CC test/env/memory/memory_ut.o 00:03:25.999 LINK nvme_fuzz 00:03:26.258 LINK thread 00:03:26.258 CC app/spdk_tgt/spdk_tgt.o 00:03:26.258 LINK vtophys 00:03:26.258 LINK env_dpdk_post_init 00:03:26.258 CXX test/cpp_headers/bdev_module.o 00:03:26.258 CXX test/cpp_headers/bdev_zone.o 00:03:26.258 CXX test/cpp_headers/bit_array.o 00:03:26.258 CXX test/cpp_headers/bit_pool.o 00:03:26.517 CC examples/sock/hello_world/hello_sock.o 00:03:26.517 LINK spdk_tgt 00:03:26.517 CXX test/cpp_headers/blob_bdev.o 00:03:26.517 CXX test/cpp_headers/blobfs_bdev.o 00:03:26.517 CC app/spdk_nvme_perf/perf.o 00:03:26.517 CC app/spdk_lspci/spdk_lspci.o 00:03:26.517 CC app/spdk_nvme_identify/identify.o 00:03:26.775 CXX test/cpp_headers/blobfs.o 00:03:26.775 CC app/spdk_nvme_discover/discovery_aer.o 00:03:26.775 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:26.775 LINK spdk_lspci 00:03:26.775 CC app/spdk_top/spdk_top.o 00:03:26.775 LINK hello_sock 00:03:26.775 CXX test/cpp_headers/blob.o 00:03:26.775 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:27.034 LINK spdk_nvme_discover 00:03:27.034 CXX test/cpp_headers/conf.o 00:03:27.293 CC test/event/event_perf/event_perf.o 00:03:27.293 CC examples/vmd/lsvmd/lsvmd.o 00:03:27.293 CC test/event/reactor/reactor.o 00:03:27.293 CXX test/cpp_headers/config.o 00:03:27.293 CXX test/cpp_headers/cpuset.o 00:03:27.293 LINK event_perf 00:03:27.293 LINK lsvmd 00:03:27.293 LINK vhost_fuzz 00:03:27.293 LINK reactor 
00:03:27.552 CXX test/cpp_headers/crc16.o 00:03:27.552 LINK memory_ut 00:03:27.552 CC test/app/histogram_perf/histogram_perf.o 00:03:27.552 CC test/app/jsoncat/jsoncat.o 00:03:27.552 CXX test/cpp_headers/crc32.o 00:03:27.552 CC examples/vmd/led/led.o 00:03:27.811 LINK iscsi_fuzz 00:03:27.811 CC test/event/reactor_perf/reactor_perf.o 00:03:27.811 LINK spdk_nvme_perf 00:03:27.811 LINK histogram_perf 00:03:27.811 LINK spdk_nvme_identify 00:03:27.811 CC test/env/pci/pci_ut.o 00:03:27.811 LINK jsoncat 00:03:27.811 CXX test/cpp_headers/crc64.o 00:03:27.811 LINK led 00:03:28.070 LINK reactor_perf 00:03:28.070 LINK spdk_top 00:03:28.070 CXX test/cpp_headers/dif.o 00:03:28.070 CC test/event/app_repeat/app_repeat.o 00:03:28.070 CC test/event/scheduler/scheduler.o 00:03:28.070 CC test/app/stub/stub.o 00:03:28.070 CC app/vhost/vhost.o 00:03:28.328 CXX test/cpp_headers/dma.o 00:03:28.328 CC test/nvme/aer/aer.o 00:03:28.328 LINK app_repeat 00:03:28.328 CC test/nvme/reset/reset.o 00:03:28.328 CC examples/idxd/perf/perf.o 00:03:28.328 LINK pci_ut 00:03:28.328 LINK vhost 00:03:28.328 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:28.328 LINK stub 00:03:28.328 CXX test/cpp_headers/endian.o 00:03:28.328 LINK scheduler 00:03:28.586 CXX test/cpp_headers/env_dpdk.o 00:03:28.586 CC test/nvme/sgl/sgl.o 00:03:28.586 CXX test/cpp_headers/env.o 00:03:28.586 LINK aer 00:03:28.586 LINK reset 00:03:28.586 CC app/spdk_dd/spdk_dd.o 00:03:28.586 LINK idxd_perf 00:03:28.845 LINK hello_fsdev 00:03:28.845 CC test/nvme/e2edp/nvme_dp.o 00:03:28.845 CXX test/cpp_headers/event.o 00:03:28.845 CXX test/cpp_headers/fd_group.o 00:03:28.845 CXX test/cpp_headers/fd.o 00:03:28.845 CC test/nvme/overhead/overhead.o 00:03:28.845 LINK sgl 00:03:28.845 CC test/accel/dif/dif.o 00:03:28.845 CXX test/cpp_headers/file.o 00:03:29.105 CXX test/cpp_headers/fsdev.o 00:03:29.105 LINK nvme_dp 00:03:29.105 CC test/nvme/err_injection/err_injection.o 00:03:29.105 CC test/nvme/startup/startup.o 00:03:29.105 LINK spdk_dd 
00:03:29.105 CC examples/accel/perf/accel_perf.o 00:03:29.364 LINK overhead 00:03:29.364 CC examples/nvme/hello_world/hello_world.o 00:03:29.364 CC examples/blob/hello_world/hello_blob.o 00:03:29.364 CXX test/cpp_headers/fsdev_module.o 00:03:29.364 LINK startup 00:03:29.364 LINK err_injection 00:03:29.623 CC examples/blob/cli/blobcli.o 00:03:29.623 CXX test/cpp_headers/ftl.o 00:03:29.623 CXX test/cpp_headers/fuse_dispatcher.o 00:03:29.623 LINK hello_world 00:03:29.623 CXX test/cpp_headers/gpt_spec.o 00:03:29.623 LINK hello_blob 00:03:29.623 CC app/fio/nvme/fio_plugin.o 00:03:29.623 CC test/nvme/reserve/reserve.o 00:03:29.623 CXX test/cpp_headers/hexlify.o 00:03:29.882 CC test/nvme/simple_copy/simple_copy.o 00:03:29.882 LINK dif 00:03:29.882 LINK accel_perf 00:03:29.882 CC examples/nvme/reconnect/reconnect.o 00:03:29.882 CXX test/cpp_headers/histogram_data.o 00:03:29.882 LINK reserve 00:03:29.882 CC test/blobfs/mkfs/mkfs.o 00:03:30.140 LINK simple_copy 00:03:30.140 CXX test/cpp_headers/idxd.o 00:03:30.140 CXX test/cpp_headers/idxd_spec.o 00:03:30.140 CC test/lvol/esnap/esnap.o 00:03:30.140 LINK blobcli 00:03:30.140 LINK mkfs 00:03:30.398 LINK spdk_nvme 00:03:30.398 CXX test/cpp_headers/init.o 00:03:30.398 LINK reconnect 00:03:30.398 CC test/nvme/connect_stress/connect_stress.o 00:03:30.398 CC examples/bdev/hello_world/hello_bdev.o 00:03:30.398 CC test/nvme/boot_partition/boot_partition.o 00:03:30.398 CC test/bdev/bdevio/bdevio.o 00:03:30.398 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:30.703 CXX test/cpp_headers/ioat.o 00:03:30.703 CC app/fio/bdev/fio_plugin.o 00:03:30.703 CXX test/cpp_headers/ioat_spec.o 00:03:30.703 LINK boot_partition 00:03:30.703 LINK connect_stress 00:03:30.703 CC examples/bdev/bdevperf/bdevperf.o 00:03:30.703 LINK hello_bdev 00:03:30.703 CXX test/cpp_headers/iscsi_spec.o 00:03:30.962 CC test/nvme/compliance/nvme_compliance.o 00:03:30.962 CC test/nvme/fused_ordering/fused_ordering.o 00:03:30.962 CC examples/nvme/arbitration/arbitration.o 
00:03:30.962 LINK bdevio 00:03:30.962 CC examples/nvme/hotplug/hotplug.o 00:03:30.962 CXX test/cpp_headers/json.o 00:03:31.221 LINK nvme_manage 00:03:31.221 LINK fused_ordering 00:03:31.221 CXX test/cpp_headers/jsonrpc.o 00:03:31.221 LINK nvme_compliance 00:03:31.221 LINK hotplug 00:03:31.221 LINK spdk_bdev 00:03:31.221 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:31.221 CXX test/cpp_headers/keyring.o 00:03:31.479 LINK arbitration 00:03:31.479 CXX test/cpp_headers/keyring_module.o 00:03:31.479 CXX test/cpp_headers/likely.o 00:03:31.479 CC examples/nvme/abort/abort.o 00:03:31.479 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:31.479 LINK cmb_copy 00:03:31.479 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:31.479 CXX test/cpp_headers/log.o 00:03:31.479 CXX test/cpp_headers/lvol.o 00:03:31.738 CC test/nvme/fdp/fdp.o 00:03:31.738 LINK bdevperf 00:03:31.738 CC test/nvme/cuse/cuse.o 00:03:31.738 LINK doorbell_aers 00:03:31.738 LINK pmr_persistence 00:03:31.738 CXX test/cpp_headers/md5.o 00:03:31.738 CXX test/cpp_headers/memory.o 00:03:31.738 CXX test/cpp_headers/mmio.o 00:03:31.738 LINK abort 00:03:31.996 CXX test/cpp_headers/nbd.o 00:03:31.996 CXX test/cpp_headers/net.o 00:03:31.996 CXX test/cpp_headers/notify.o 00:03:31.996 CXX test/cpp_headers/nvme.o 00:03:31.996 CXX test/cpp_headers/nvme_intel.o 00:03:31.996 CXX test/cpp_headers/nvme_ocssd.o 00:03:31.996 LINK fdp 00:03:31.996 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:31.996 CXX test/cpp_headers/nvme_spec.o 00:03:32.254 CXX test/cpp_headers/nvme_zns.o 00:03:32.254 CXX test/cpp_headers/nvmf_cmd.o 00:03:32.254 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:32.254 CXX test/cpp_headers/nvmf.o 00:03:32.254 CXX test/cpp_headers/nvmf_spec.o 00:03:32.254 CXX test/cpp_headers/nvmf_transport.o 00:03:32.254 CC examples/nvmf/nvmf/nvmf.o 00:03:32.254 CXX test/cpp_headers/opal.o 00:03:32.254 CXX test/cpp_headers/opal_spec.o 00:03:32.512 CXX test/cpp_headers/pci_ids.o 00:03:32.512 CXX test/cpp_headers/pipe.o 00:03:32.512 CXX 
test/cpp_headers/queue.o 00:03:32.512 CXX test/cpp_headers/reduce.o 00:03:32.512 CXX test/cpp_headers/rpc.o 00:03:32.512 CXX test/cpp_headers/scheduler.o 00:03:32.512 CXX test/cpp_headers/scsi.o 00:03:32.512 CXX test/cpp_headers/scsi_spec.o 00:03:32.512 CXX test/cpp_headers/sock.o 00:03:32.512 CXX test/cpp_headers/stdinc.o 00:03:32.512 CXX test/cpp_headers/string.o 00:03:32.769 LINK nvmf 00:03:32.769 CXX test/cpp_headers/thread.o 00:03:32.769 CXX test/cpp_headers/trace.o 00:03:32.769 CXX test/cpp_headers/trace_parser.o 00:03:32.769 CXX test/cpp_headers/tree.o 00:03:32.769 CXX test/cpp_headers/ublk.o 00:03:32.769 CXX test/cpp_headers/util.o 00:03:32.769 CXX test/cpp_headers/uuid.o 00:03:32.769 CXX test/cpp_headers/version.o 00:03:32.769 CXX test/cpp_headers/vfio_user_pci.o 00:03:33.026 CXX test/cpp_headers/vfio_user_spec.o 00:03:33.026 CXX test/cpp_headers/vhost.o 00:03:33.026 CXX test/cpp_headers/vmd.o 00:03:33.026 CXX test/cpp_headers/xor.o 00:03:33.026 CXX test/cpp_headers/zipf.o 00:03:33.963 LINK cuse 00:03:37.251 LINK esnap 00:03:37.510 00:03:37.510 real 1m40.688s 00:03:37.510 user 9m7.340s 00:03:37.510 sys 1m59.312s 00:03:37.510 18:00:49 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:37.510 18:00:49 make -- common/autotest_common.sh@10 -- $ set +x 00:03:37.510 ************************************ 00:03:37.510 END TEST make 00:03:37.510 ************************************ 00:03:37.771 18:00:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:37.771 18:00:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:37.771 18:00:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:37.771 18:00:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.771 18:00:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:37.771 18:00:49 -- pm/common@44 -- $ pid=5472 00:03:37.771 18:00:49 -- pm/common@50 -- $ kill -TERM 5472 00:03:37.771 18:00:49 -- pm/common@42 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.771 18:00:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:37.771 18:00:49 -- pm/common@44 -- $ pid=5474 00:03:37.771 18:00:49 -- pm/common@50 -- $ kill -TERM 5474 00:03:37.771 18:00:49 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:37.771 18:00:49 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:37.771 18:00:49 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:37.771 18:00:49 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:37.771 18:00:49 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:37.771 18:00:49 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:37.771 18:00:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.771 18:00:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.771 18:00:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.771 18:00:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.771 18:00:49 -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.771 18:00:49 -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.771 18:00:49 -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.771 18:00:49 -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.771 18:00:49 -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.771 18:00:49 -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.771 18:00:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.771 18:00:49 -- scripts/common.sh@344 -- # case "$op" in 00:03:37.771 18:00:49 -- scripts/common.sh@345 -- # : 1 00:03:37.771 18:00:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.771 18:00:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.771 18:00:49 -- scripts/common.sh@365 -- # decimal 1 00:03:37.771 18:00:49 -- scripts/common.sh@353 -- # local d=1 00:03:37.771 18:00:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.771 18:00:49 -- scripts/common.sh@355 -- # echo 1 00:03:37.771 18:00:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.771 18:00:49 -- scripts/common.sh@366 -- # decimal 2 00:03:37.771 18:00:49 -- scripts/common.sh@353 -- # local d=2 00:03:37.771 18:00:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.771 18:00:49 -- scripts/common.sh@355 -- # echo 2 00:03:37.771 18:00:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.771 18:00:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.771 18:00:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.771 18:00:49 -- scripts/common.sh@368 -- # return 0 00:03:37.771 18:00:49 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.771 18:00:49 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:37.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.771 --rc genhtml_branch_coverage=1 00:03:37.771 --rc genhtml_function_coverage=1 00:03:37.771 --rc genhtml_legend=1 00:03:37.771 --rc geninfo_all_blocks=1 00:03:37.771 --rc geninfo_unexecuted_blocks=1 00:03:37.771 00:03:37.771 ' 00:03:37.771 18:00:49 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:37.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.771 --rc genhtml_branch_coverage=1 00:03:37.771 --rc genhtml_function_coverage=1 00:03:37.771 --rc genhtml_legend=1 00:03:37.771 --rc geninfo_all_blocks=1 00:03:37.771 --rc geninfo_unexecuted_blocks=1 00:03:37.771 00:03:37.771 ' 00:03:37.771 18:00:49 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:37.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.771 --rc genhtml_branch_coverage=1 00:03:37.771 --rc 
genhtml_function_coverage=1 00:03:37.771 --rc genhtml_legend=1 00:03:37.771 --rc geninfo_all_blocks=1 00:03:37.771 --rc geninfo_unexecuted_blocks=1 00:03:37.771 00:03:37.771 ' 00:03:37.771 18:00:49 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:37.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.771 --rc genhtml_branch_coverage=1 00:03:37.771 --rc genhtml_function_coverage=1 00:03:37.771 --rc genhtml_legend=1 00:03:37.771 --rc geninfo_all_blocks=1 00:03:37.771 --rc geninfo_unexecuted_blocks=1 00:03:37.771 00:03:37.771 ' 00:03:37.771 18:00:49 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:37.771 18:00:49 -- nvmf/common.sh@7 -- # uname -s 00:03:37.771 18:00:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.771 18:00:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.771 18:00:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.771 18:00:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.771 18:00:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.771 18:00:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.771 18:00:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.771 18:00:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.771 18:00:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.771 18:00:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:38.032 18:00:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65089fda-b69e-46f5-994f-34d45af0c95c 00:03:38.032 18:00:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=65089fda-b69e-46f5-994f-34d45af0c95c 00:03:38.032 18:00:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:38.032 18:00:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:38.032 18:00:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:38.032 18:00:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:38.032 18:00:49 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:38.032 18:00:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:38.032 18:00:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:38.032 18:00:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:38.032 18:00:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:38.032 18:00:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.032 18:00:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.032 18:00:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.032 18:00:49 -- paths/export.sh@5 -- # export PATH 00:03:38.032 18:00:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.032 18:00:49 -- nvmf/common.sh@51 -- # : 0 00:03:38.032 18:00:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:38.032 18:00:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:38.032 18:00:49 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:38.032 18:00:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:38.032 18:00:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:38.032 18:00:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:38.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:38.032 18:00:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:38.032 18:00:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:38.032 18:00:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:38.032 18:00:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:38.032 18:00:49 -- spdk/autotest.sh@32 -- # uname -s 00:03:38.032 18:00:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:38.032 18:00:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:38.032 18:00:49 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.032 18:00:49 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:38.032 18:00:49 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:38.032 18:00:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:38.032 18:00:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:38.032 18:00:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:38.032 18:00:50 -- spdk/autotest.sh@48 -- # udevadm_pid=54611 00:03:38.032 18:00:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:38.032 18:00:50 -- pm/common@17 -- # local monitor 00:03:38.032 18:00:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.032 18:00:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:38.032 18:00:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.032 18:00:50 -- pm/common@25 -- # sleep 1 00:03:38.032 18:00:50 -- pm/common@21 -- # date +%s 00:03:38.032 18:00:50 -- 
pm/common@21 -- # date +%s 00:03:38.032 18:00:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733508050 00:03:38.032 18:00:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733508050 00:03:38.032 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733508050_collect-vmstat.pm.log 00:03:38.032 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733508050_collect-cpu-load.pm.log 00:03:38.972 18:00:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:38.972 18:00:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:38.972 18:00:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.972 18:00:51 -- common/autotest_common.sh@10 -- # set +x 00:03:38.972 18:00:51 -- spdk/autotest.sh@59 -- # create_test_list 00:03:38.972 18:00:51 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:38.972 18:00:51 -- common/autotest_common.sh@10 -- # set +x 00:03:38.972 18:00:51 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:38.972 18:00:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:38.972 18:00:51 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:38.972 18:00:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:38.972 18:00:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:38.972 18:00:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:38.972 18:00:51 -- common/autotest_common.sh@1457 -- # uname 00:03:38.972 18:00:51 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:38.972 18:00:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:38.972 18:00:51 -- common/autotest_common.sh@1477 -- 
# uname 00:03:38.972 18:00:51 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:38.972 18:00:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:38.972 18:00:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:39.231 lcov: LCOV version 1.15 00:03:39.231 18:00:51 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:54.202 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:54.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:12.307 18:01:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:12.308 18:01:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.308 18:01:22 -- common/autotest_common.sh@10 -- # set +x 00:04:12.308 18:01:22 -- spdk/autotest.sh@78 -- # rm -f 00:04:12.308 18:01:22 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.308 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:12.308 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:12.308 18:01:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:12.308 18:01:23 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:12.308 18:01:23 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:12.308 18:01:23 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:12.308 
18:01:23 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:12.308 18:01:23 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:12.308 18:01:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:12.308 18:01:23 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:12.308 18:01:23 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:12.308 18:01:23 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:12.308 18:01:23 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:12.308 18:01:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.308 18:01:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.308 18:01:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:12.308 18:01:23 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:12.308 18:01:23 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:12.308 18:01:23 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:12.308 18:01:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:12.308 18:01:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:12.308 18:01:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.308 18:01:23 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:12.308 18:01:23 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:12.308 18:01:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:12.308 18:01:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:12.308 18:01:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.308 18:01:23 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:12.308 18:01:23 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:12.308 18:01:23 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:12.308 18:01:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:12.308 18:01:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.308 18:01:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:12.308 18:01:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.308 18:01:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.308 18:01:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:12.308 18:01:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:12.308 18:01:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:12.308 No valid GPT data, bailing 00:04:12.308 18:01:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.308 18:01:23 -- scripts/common.sh@394 -- # pt= 00:04:12.308 18:01:23 -- scripts/common.sh@395 -- # return 1 00:04:12.308 18:01:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.308 1+0 records in 00:04:12.308 1+0 records out 00:04:12.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00724526 s, 145 MB/s 00:04:12.308 18:01:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.308 18:01:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.308 18:01:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:12.308 18:01:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:12.308 18:01:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:12.308 No valid GPT data, bailing 00:04:12.308 18:01:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:12.308 18:01:23 -- scripts/common.sh@394 -- # pt= 00:04:12.308 18:01:23 -- scripts/common.sh@395 -- # return 1 00:04:12.308 18:01:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:12.308 1+0 records in 00:04:12.308 1+0 records 
out 00:04:12.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618894 s, 169 MB/s 00:04:12.308 18:01:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.308 18:01:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.308 18:01:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:12.308 18:01:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:12.308 18:01:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:12.308 No valid GPT data, bailing 00:04:12.308 18:01:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:12.308 18:01:23 -- scripts/common.sh@394 -- # pt= 00:04:12.308 18:01:23 -- scripts/common.sh@395 -- # return 1 00:04:12.308 18:01:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:12.308 1+0 records in 00:04:12.308 1+0 records out 00:04:12.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428732 s, 245 MB/s 00:04:12.308 18:01:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.308 18:01:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.308 18:01:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:12.308 18:01:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:12.308 18:01:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:12.308 No valid GPT data, bailing 00:04:12.308 18:01:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:12.308 18:01:23 -- scripts/common.sh@394 -- # pt= 00:04:12.308 18:01:23 -- scripts/common.sh@395 -- # return 1 00:04:12.308 18:01:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:12.308 1+0 records in 00:04:12.308 1+0 records out 00:04:12.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634418 s, 165 MB/s 00:04:12.308 18:01:23 -- spdk/autotest.sh@105 -- # sync 00:04:12.308 18:01:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
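Each `block_in_use` pass above probes for a partition table (via `spdk-gpt.py` and `blkid -s PTTYPE`) and, finding none ("No valid GPT data, bailing"), zeroes the first MiB of the namespace with `dd`. A simplified sketch of that decision, using only the `blkid` probe (the GPT script step is omitted):

```shell
#!/usr/bin/env bash
# If blkid reports no partition-table type, treat the device as unused
# and clear its first MiB so stale metadata cannot confuse later tests.
wipe_if_unused() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$block" bs=1M count=1 conv=notrunc 2>/dev/null
    else
        echo "skipping $block: partition table '$pt' present" >&2
        return 1
    fi
}
```

`conv=notrunc` is added here so the sketch is safe to exercise on a regular file; on a raw block device it makes no difference.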
reap_spdk_processes 00:04:12.308 18:01:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.308 18:01:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.843 18:01:26 -- spdk/autotest.sh@111 -- # uname -s 00:04:14.843 18:01:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:14.843 18:01:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:14.843 18:01:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:15.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.410 Hugepages 00:04:15.410 node hugesize free / total 00:04:15.410 node0 1048576kB 0 / 0 00:04:15.410 node0 2048kB 0 / 0 00:04:15.410 00:04:15.410 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.669 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:15.669 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:15.928 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:15.928 18:01:27 -- spdk/autotest.sh@117 -- # uname -s 00:04:15.928 18:01:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:15.928 18:01:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:15.928 18:01:27 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.865 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.123 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.123 18:01:29 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:18.170 18:01:30 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:18.170 18:01:30 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:18.170 18:01:30 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:18.170 18:01:30 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:04:18.170 18:01:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:18.170 18:01:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:18.170 18:01:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.170 18:01:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:18.170 18:01:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.170 18:01:30 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:18.170 18:01:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:18.170 18:01:30 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.738 Waiting for block devices as requested 00:04:18.738 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.738 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.998 18:01:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:18.998 18:01:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:18.998 18:01:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:18.998 18:01:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:18.998 18:01:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:18.998 18:01:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:18.998 18:01:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:18.998 18:01:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:18.998 18:01:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:18.998 
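`get_nvme_bdfs` above builds its BDF list by piping the JSON emitted by `scripts/gen_nvme.sh` through `jq -r '.config[].params.traddr'`. A sketch of the same extraction against inline JSON; the config shape is inferred from the `jq` filter in the trace, and the sample document is a stand-in for real `gen_nvme.sh` output:

```shell
#!/usr/bin/env bash
# Pull PCI addresses (traddr) out of an SPDK-style bdev config on stdin.
get_nvme_bdfs_from_json() {
    jq -r '.config[].params.traddr'
}

# Stand-in for gen_nvme.sh output on the VM in this run.
sample_config='{
  "config": [
    { "params": { "traddr": "0000:00:10.0" } },
    { "params": { "traddr": "0000:00:11.0" } }
  ]
}'
```

With two controllers found, the `(( 2 == 0 ))` guard in the trace passes and both BDFs are printed.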
18:01:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:18.998 18:01:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:18.998 18:01:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:18.998 18:01:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:18.998 18:01:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:18.998 18:01:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:18.998 18:01:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:18.998 18:01:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:18.998 18:01:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:18.998 18:01:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:18.998 18:01:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:18.998 18:01:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:18.998 18:01:30 -- common/autotest_common.sh@1543 -- # continue 00:04:18.998 18:01:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:18.998 18:01:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:18.998 18:01:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:18.998 18:01:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:18.998 18:01:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:18.998 18:01:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:18.998 18:01:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:18.998 18:01:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:18.998 18:01:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:18.998 18:01:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:18.998 18:01:31 -- 
common/autotest_common.sh@1531 -- # grep oacs 00:04:18.998 18:01:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:18.998 18:01:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:18.998 18:01:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:18.998 18:01:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:18.998 18:01:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:18.998 18:01:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:18.998 18:01:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:18.998 18:01:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:18.998 18:01:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:18.998 18:01:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:18.998 18:01:31 -- common/autotest_common.sh@1543 -- # continue 00:04:18.998 18:01:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:18.998 18:01:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.998 18:01:31 -- common/autotest_common.sh@10 -- # set +x 00:04:18.998 18:01:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:18.998 18:01:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.998 18:01:31 -- common/autotest_common.sh@10 -- # set +x 00:04:18.998 18:01:31 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.933 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.933 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.191 18:01:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:20.191 18:01:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.191 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:04:20.191 18:01:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:20.191 18:01:32 -- 
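The `nvme id-ctrl | grep oacs | cut -d: -f2` pipeline above yields `0x12a`, and the trace then derives `oacs_ns_manage=8`: bit 3 of the OACS field advertises Namespace Management support. A sketch of that parse, with the sample line standing in for real controller output:

```shell
#!/usr/bin/env bash
# Extract the OACS value from `nvme id-ctrl`-style output and mask
# bit 3 (0x8), the Namespace Management capability, as the trace does.
oacs_ns_manage() {
    local id_ctrl_output=$1 oacs
    oacs=$(grep oacs <<<"$id_ctrl_output" | cut -d: -f2)
    echo $(( oacs & 0x8 ))
}
```

For `0x12a` the masked value is 8 (non-zero), so both QEMU controllers in this run support namespace management and the `unvmcap` check that follows is meaningful.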
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:20.191 18:01:32 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:20.191 18:01:32 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:20.191 18:01:32 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:20.191 18:01:32 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:20.191 18:01:32 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:20.191 18:01:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:20.191 18:01:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:20.191 18:01:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:20.191 18:01:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.191 18:01:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:20.191 18:01:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:20.191 18:01:32 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:20.191 18:01:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:20.191 18:01:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:20.191 18:01:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:20.191 18:01:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:20.191 18:01:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:20.191 18:01:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:20.191 18:01:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:20.191 18:01:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:20.191 18:01:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:20.191 18:01:32 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:20.191 18:01:32 -- 
common/autotest_common.sh@1572 -- # return 0 00:04:20.191 18:01:32 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:20.191 18:01:32 -- common/autotest_common.sh@1580 -- # return 0 00:04:20.191 18:01:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:20.191 18:01:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:20.191 18:01:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.191 18:01:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.191 18:01:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:20.191 18:01:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.191 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:04:20.191 18:01:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:20.191 18:01:32 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:20.191 18:01:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.191 18:01:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.191 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:04:20.191 ************************************ 00:04:20.191 START TEST env 00:04:20.191 ************************************ 00:04:20.191 18:01:32 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:20.449 * Looking for test storage... 
00:04:20.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:20.449 18:01:32 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:20.449 18:01:32 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:20.449 18:01:32 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:20.449 18:01:32 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:20.449 18:01:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.449 18:01:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.449 18:01:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.449 18:01:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.449 18:01:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.449 18:01:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.449 18:01:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.449 18:01:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.449 18:01:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.449 18:01:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.449 18:01:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.449 18:01:32 env -- scripts/common.sh@344 -- # case "$op" in 00:04:20.449 18:01:32 env -- scripts/common.sh@345 -- # : 1 00:04:20.449 18:01:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.449 18:01:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.449 18:01:32 env -- scripts/common.sh@365 -- # decimal 1 00:04:20.449 18:01:32 env -- scripts/common.sh@353 -- # local d=1 00:04:20.449 18:01:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.449 18:01:32 env -- scripts/common.sh@355 -- # echo 1 00:04:20.449 18:01:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.449 18:01:32 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.449 18:01:32 env -- scripts/common.sh@353 -- # local d=2 00:04:20.449 18:01:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.449 18:01:32 env -- scripts/common.sh@355 -- # echo 2 00:04:20.449 18:01:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.449 18:01:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.449 18:01:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.449 18:01:32 env -- scripts/common.sh@368 -- # return 0 00:04:20.449 18:01:32 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.449 18:01:32 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:20.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.449 --rc genhtml_branch_coverage=1 00:04:20.449 --rc genhtml_function_coverage=1 00:04:20.449 --rc genhtml_legend=1 00:04:20.449 --rc geninfo_all_blocks=1 00:04:20.449 --rc geninfo_unexecuted_blocks=1 00:04:20.449 00:04:20.449 ' 00:04:20.449 18:01:32 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:20.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.449 --rc genhtml_branch_coverage=1 00:04:20.449 --rc genhtml_function_coverage=1 00:04:20.449 --rc genhtml_legend=1 00:04:20.449 --rc geninfo_all_blocks=1 00:04:20.449 --rc geninfo_unexecuted_blocks=1 00:04:20.449 00:04:20.449 ' 00:04:20.450 18:01:32 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:20.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
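The `lt 1.15 2` call traced above drives `cmp_versions` in `scripts/common.sh`: both versions are split into fields and compared numerically, position by position, with missing fields counted as 0. A condensed sketch of the same field-wise comparison (the helper name `ver_lt` is this sketch's, not the script's):

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed when version A sorts strictly before version B.
ver_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<<"$1"
    IFS=. read -ra v2 <<<"$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # 10# forces base-10 so fields like "08" don't parse as octal.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( 10#$a < 10#$b )) && return 0
        (( 10#$a > 10#$b )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

Here `ver_lt 1.15 2` succeeds, which is why the trace goes on to export the branch/function-coverage `LCOV_OPTS` workarounds for lcov 1.x.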
00:04:20.450 --rc genhtml_branch_coverage=1 00:04:20.450 --rc genhtml_function_coverage=1 00:04:20.450 --rc genhtml_legend=1 00:04:20.450 --rc geninfo_all_blocks=1 00:04:20.450 --rc geninfo_unexecuted_blocks=1 00:04:20.450 00:04:20.450 ' 00:04:20.450 18:01:32 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:20.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.450 --rc genhtml_branch_coverage=1 00:04:20.450 --rc genhtml_function_coverage=1 00:04:20.450 --rc genhtml_legend=1 00:04:20.450 --rc geninfo_all_blocks=1 00:04:20.450 --rc geninfo_unexecuted_blocks=1 00:04:20.450 00:04:20.450 ' 00:04:20.450 18:01:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.450 18:01:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.450 18:01:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.450 18:01:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.450 ************************************ 00:04:20.450 START TEST env_memory 00:04:20.450 ************************************ 00:04:20.450 18:01:32 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.450 00:04:20.450 00:04:20.450 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.450 http://cunit.sourceforge.net/ 00:04:20.450 00:04:20.450 00:04:20.450 Suite: memory 00:04:20.706 Test: alloc and free memory map ...[2024-12-06 18:01:32.637272] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.706 passed 00:04:20.706 Test: mem map translation ...[2024-12-06 18:01:32.690859] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.706 [2024-12-06 18:01:32.690982] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.706 [2024-12-06 18:01:32.691114] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.706 [2024-12-06 18:01:32.691198] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.706 passed 00:04:20.706 Test: mem map registration ...[2024-12-06 18:01:32.774850] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.706 [2024-12-06 18:01:32.774983] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.706 passed 00:04:20.964 Test: mem map adjacent registrations ...passed 00:04:20.964 00:04:20.964 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.964 suites 1 1 n/a 0 0 00:04:20.964 tests 4 4 4 0 0 00:04:20.964 asserts 152 152 152 0 n/a 00:04:20.964 00:04:20.964 Elapsed time = 0.294 seconds 00:04:20.964 00:04:20.964 ************************************ 00:04:20.964 END TEST env_memory 00:04:20.964 ************************************ 00:04:20.964 real 0m0.350s 00:04:20.964 user 0m0.307s 00:04:20.964 sys 0m0.031s 00:04:20.964 18:01:32 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.964 18:01:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.964 18:01:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.964 18:01:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.964 18:01:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.964 18:01:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.964 
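The expected `*ERROR*` lines in the `mem map translation` test above suggest the parameter rule being exercised: `vaddr=2097152 len=1234` and `vaddr=1234 len=2097152` each fail, i.e. both the address and the length must apparently be 2 MiB-aligned (2097152 = 0x200000). A small sketch of that inferred check (an assumption drawn from the log, not SPDK's actual validation code):

```shell
#!/usr/bin/env bash
# Check 2 MiB (hugepage-granularity) alignment, the apparent rule the
# rejected spdk_mem_map_set_translation parameters above violate.
is_2mb_aligned() {
    local v=$1
    (( v % (2 * 1024 * 1024) == 0 ))
}
```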
************************************ 00:04:20.964 START TEST env_vtophys 00:04:20.964 ************************************ 00:04:20.964 18:01:32 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.964 EAL: lib.eal log level changed from notice to debug 00:04:20.964 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.964 EAL: Detected lcore 1 as core 0 on socket 0 00:04:20.964 EAL: Detected lcore 2 as core 0 on socket 0 00:04:20.964 EAL: Detected lcore 3 as core 0 on socket 0 00:04:20.964 EAL: Detected lcore 4 as core 0 on socket 0 00:04:20.964 EAL: Detected lcore 5 as core 0 on socket 0 00:04:20.964 EAL: Detected lcore 6 as core 0 on socket 0 00:04:20.964 EAL: Detected lcore 7 as core 0 on socket 0 00:04:20.964 EAL: Detected lcore 8 as core 0 on socket 0 00:04:20.964 EAL: Detected lcore 9 as core 0 on socket 0 00:04:20.964 EAL: Maximum logical cores by configuration: 128 00:04:20.964 EAL: Detected CPU lcores: 10 00:04:20.964 EAL: Detected NUMA nodes: 1 00:04:20.964 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.964 EAL: Detected shared linkage of DPDK 00:04:20.964 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.964 EAL: Selected IOVA mode 'PA' 00:04:20.964 EAL: Probing VFIO support... 00:04:20.965 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.965 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:20.965 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.965 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.965 EAL: Setting up physically contiguous memory... 
00:04:20.965 EAL: Setting maximum number of open files to 524288 00:04:20.965 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.965 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.965 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.965 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.965 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.965 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.965 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.965 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.965 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.965 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.965 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.965 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.965 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.965 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.965 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.965 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.965 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.965 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.965 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:20.965 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.965 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.965 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.965 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.965 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.965 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.965 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.965 EAL: Hugepages will be freed exactly as allocated. 
00:04:20.965 EAL: No shared files mode enabled, IPC is disabled 00:04:20.965 EAL: No shared files mode enabled, IPC is disabled 00:04:21.224 EAL: TSC frequency is ~2290000 KHz 00:04:21.224 EAL: Main lcore 0 is ready (tid=7f2ebf0d6a40;cpuset=[0]) 00:04:21.224 EAL: Trying to obtain current memory policy. 00:04:21.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.224 EAL: Restoring previous memory policy: 0 00:04:21.224 EAL: request: mp_malloc_sync 00:04:21.224 EAL: No shared files mode enabled, IPC is disabled 00:04:21.224 EAL: Heap on socket 0 was expanded by 2MB 00:04:21.224 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:21.224 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:21.224 EAL: Mem event callback 'spdk:(nil)' registered 00:04:21.224 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:21.224 00:04:21.224 00:04:21.224 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.224 http://cunit.sourceforge.net/ 00:04:21.224 00:04:21.224 00:04:21.224 Suite: components_suite 00:04:21.792 Test: vtophys_malloc_test ...passed 00:04:21.792 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:21.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.792 EAL: Restoring previous memory policy: 4 00:04:21.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.792 EAL: request: mp_malloc_sync 00:04:21.792 EAL: No shared files mode enabled, IPC is disabled 00:04:21.792 EAL: Heap on socket 0 was expanded by 4MB 00:04:21.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.792 EAL: request: mp_malloc_sync 00:04:21.792 EAL: No shared files mode enabled, IPC is disabled 00:04:21.792 EAL: Heap on socket 0 was shrunk by 4MB 00:04:21.792 EAL: Trying to obtain current memory policy. 
00:04:21.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.792 EAL: Restoring previous memory policy: 4 00:04:21.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.792 EAL: request: mp_malloc_sync 00:04:21.792 EAL: No shared files mode enabled, IPC is disabled 00:04:21.792 EAL: Heap on socket 0 was expanded by 6MB 00:04:21.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.792 EAL: request: mp_malloc_sync 00:04:21.792 EAL: No shared files mode enabled, IPC is disabled 00:04:21.792 EAL: Heap on socket 0 was shrunk by 6MB 00:04:21.792 EAL: Trying to obtain current memory policy. 00:04:21.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.792 EAL: Restoring previous memory policy: 4 00:04:21.792 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.792 EAL: request: mp_malloc_sync 00:04:21.793 EAL: No shared files mode enabled, IPC is disabled 00:04:21.793 EAL: Heap on socket 0 was expanded by 10MB 00:04:21.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.793 EAL: request: mp_malloc_sync 00:04:21.793 EAL: No shared files mode enabled, IPC is disabled 00:04:21.793 EAL: Heap on socket 0 was shrunk by 10MB 00:04:21.793 EAL: Trying to obtain current memory policy. 00:04:21.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.793 EAL: Restoring previous memory policy: 4 00:04:21.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.793 EAL: request: mp_malloc_sync 00:04:21.793 EAL: No shared files mode enabled, IPC is disabled 00:04:21.793 EAL: Heap on socket 0 was expanded by 18MB 00:04:21.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.793 EAL: request: mp_malloc_sync 00:04:21.793 EAL: No shared files mode enabled, IPC is disabled 00:04:21.793 EAL: Heap on socket 0 was shrunk by 18MB 00:04:21.793 EAL: Trying to obtain current memory policy. 
00:04:21.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.793 EAL: Restoring previous memory policy: 4 00:04:21.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.793 EAL: request: mp_malloc_sync 00:04:21.793 EAL: No shared files mode enabled, IPC is disabled 00:04:21.793 EAL: Heap on socket 0 was expanded by 34MB 00:04:22.052 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.052 EAL: request: mp_malloc_sync 00:04:22.052 EAL: No shared files mode enabled, IPC is disabled 00:04:22.052 EAL: Heap on socket 0 was shrunk by 34MB 00:04:22.052 EAL: Trying to obtain current memory policy. 00:04:22.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.052 EAL: Restoring previous memory policy: 4 00:04:22.052 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.052 EAL: request: mp_malloc_sync 00:04:22.052 EAL: No shared files mode enabled, IPC is disabled 00:04:22.052 EAL: Heap on socket 0 was expanded by 66MB 00:04:22.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.312 EAL: request: mp_malloc_sync 00:04:22.312 EAL: No shared files mode enabled, IPC is disabled 00:04:22.312 EAL: Heap on socket 0 was shrunk by 66MB 00:04:22.312 EAL: Trying to obtain current memory policy. 00:04:22.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.312 EAL: Restoring previous memory policy: 4 00:04:22.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.312 EAL: request: mp_malloc_sync 00:04:22.312 EAL: No shared files mode enabled, IPC is disabled 00:04:22.312 EAL: Heap on socket 0 was expanded by 130MB 00:04:22.571 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.571 EAL: request: mp_malloc_sync 00:04:22.571 EAL: No shared files mode enabled, IPC is disabled 00:04:22.571 EAL: Heap on socket 0 was shrunk by 130MB 00:04:22.830 EAL: Trying to obtain current memory policy. 
00:04:22.830 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.101 EAL: Restoring previous memory policy: 4 00:04:23.101 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.101 EAL: request: mp_malloc_sync 00:04:23.101 EAL: No shared files mode enabled, IPC is disabled 00:04:23.101 EAL: Heap on socket 0 was expanded by 258MB 00:04:23.672 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.672 EAL: request: mp_malloc_sync 00:04:23.672 EAL: No shared files mode enabled, IPC is disabled 00:04:23.672 EAL: Heap on socket 0 was shrunk by 258MB 00:04:24.241 EAL: Trying to obtain current memory policy. 00:04:24.241 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.241 EAL: Restoring previous memory policy: 4 00:04:24.241 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.241 EAL: request: mp_malloc_sync 00:04:24.241 EAL: No shared files mode enabled, IPC is disabled 00:04:24.241 EAL: Heap on socket 0 was expanded by 514MB 00:04:25.621 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.621 EAL: request: mp_malloc_sync 00:04:25.621 EAL: No shared files mode enabled, IPC is disabled 00:04:25.621 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.559 EAL: Trying to obtain current memory policy. 
00:04:26.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.134 EAL: Restoring previous memory policy: 4 00:04:27.134 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.134 EAL: request: mp_malloc_sync 00:04:27.134 EAL: No shared files mode enabled, IPC is disabled 00:04:27.134 EAL: Heap on socket 0 was expanded by 1026MB 00:04:29.672 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.672 EAL: request: mp_malloc_sync 00:04:29.672 EAL: No shared files mode enabled, IPC is disabled 00:04:29.672 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.576 passed 00:04:31.576 00:04:31.576 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.576 suites 1 1 n/a 0 0 00:04:31.576 tests 2 2 2 0 0 00:04:31.576 asserts 5719 5719 5719 0 n/a 00:04:31.576 00:04:31.576 Elapsed time = 10.059 seconds 00:04:31.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.576 EAL: request: mp_malloc_sync 00:04:31.576 EAL: No shared files mode enabled, IPC is disabled 00:04:31.576 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.576 EAL: No shared files mode enabled, IPC is disabled 00:04:31.576 EAL: No shared files mode enabled, IPC is disabled 00:04:31.576 EAL: No shared files mode enabled, IPC is disabled 00:04:31.576 00:04:31.576 real 0m10.439s 00:04:31.576 user 0m8.829s 00:04:31.576 sys 0m1.426s 00:04:31.576 ************************************ 00:04:31.576 END TEST env_vtophys 00:04:31.576 ************************************ 00:04:31.576 18:01:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.576 18:01:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.576 18:01:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.576 18:01:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.576 18:01:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.576 18:01:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.576 
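The env_vtophys run above grows and shrinks the DPDK heap in steps of 34MB, 66MB, 130MB, 258MB, 514MB and 1026MB. Those values match a power-of-two allocation plus a constant 2MB; note the final "shrunk by 2MB" line at teardown. A tiny sketch reproducing the observed sequence (the "+2MB overhead" reading is my interpretation of this log, not documented DPDK behavior):

```python
# Heap expansion sizes observed in the env_vtophys log above, in MB.
observed = [34, 66, 130, 258, 514, 1026]

# Each value matches 2**k + 2 for k = 5..10: a power-of-two malloc
# rounded up by what appears to be a fixed 2MB of allocator overhead.
derived = [2**k + 2 for k in range(5, 11)]

assert derived == observed
print(derived)  # → [34, 66, 130, 258, 514, 1026]
```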
************************************ 00:04:31.576 START TEST env_pci 00:04:31.576 ************************************ 00:04:31.576 18:01:43 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.576 00:04:31.576 00:04:31.576 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.576 http://cunit.sourceforge.net/ 00:04:31.576 00:04:31.576 00:04:31.576 Suite: pci 00:04:31.576 Test: pci_hook ...[2024-12-06 18:01:43.514773] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56981 has claimed it 00:04:31.576 EAL: Cannot find device (10000:00:01.0) 00:04:31.576 passed 00:04:31.576 00:04:31.576 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.576 suites 1 1 n/a 0 0 00:04:31.576 tests 1 1 1 0 0 00:04:31.576 asserts 25 25 25 0 n/a 00:04:31.576 00:04:31.576 Elapsed time = 0.008 seconds 00:04:31.576 EAL: Failed to attach device on primary process 00:04:31.576 00:04:31.576 real 0m0.095s 00:04:31.576 user 0m0.046s 00:04:31.576 sys 0m0.048s 00:04:31.576 18:01:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.576 18:01:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.576 ************************************ 00:04:31.576 END TEST env_pci 00:04:31.576 ************************************ 00:04:31.576 18:01:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.576 18:01:43 env -- env/env.sh@15 -- # uname 00:04:31.576 18:01:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.576 18:01:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.576 18:01:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.576 18:01:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:31.576 18:01:43 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.576 18:01:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.576 ************************************ 00:04:31.576 START TEST env_dpdk_post_init 00:04:31.576 ************************************ 00:04:31.576 18:01:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.576 EAL: Detected CPU lcores: 10 00:04:31.576 EAL: Detected NUMA nodes: 1 00:04:31.576 EAL: Detected shared linkage of DPDK 00:04:31.835 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.835 EAL: Selected IOVA mode 'PA' 00:04:31.835 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.835 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:31.835 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:31.835 Starting DPDK initialization... 00:04:31.835 Starting SPDK post initialization... 00:04:31.835 SPDK NVMe probe 00:04:31.835 Attaching to 0000:00:10.0 00:04:31.835 Attaching to 0000:00:11.0 00:04:31.835 Attached to 0000:00:10.0 00:04:31.835 Attached to 0000:00:11.0 00:04:31.835 Cleaning up... 
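The env_dpdk_post_init probe attaches to PCI addresses in extended BDF form (`domain:bus:device.function`), e.g. `0000:00:10.0` above and the synthetic `10000:00:01.0` domain used by the pci_hook test. A minimal sketch of parsing that format (`parse_bdf` is a hypothetical helper for illustration, not an SPDK API):

```python
def parse_bdf(addr: str):
    """Split an extended PCI address like '0000:00:10.0' into
    (domain, bus, device, function) integers; all fields are hex."""
    domain, bus, devfn = addr.split(":")
    device, function = devfn.split(".")
    return int(domain, 16), int(bus, 16), int(device, 16), int(function, 16)

print(parse_bdf("0000:00:10.0"))   # → (0, 0, 16, 0), first NVMe device probed above
print(parse_bdf("10000:00:01.0"))  # the synthetic domain from the pci_hook test
```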
00:04:31.835 00:04:31.835 real 0m0.314s 00:04:31.835 user 0m0.120s 00:04:31.835 sys 0m0.092s 00:04:31.835 18:01:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.835 18:01:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.835 ************************************ 00:04:31.835 END TEST env_dpdk_post_init 00:04:31.835 ************************************ 00:04:32.093 18:01:44 env -- env/env.sh@26 -- # uname 00:04:32.093 18:01:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.093 18:01:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.093 18:01:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.093 18:01:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.093 18:01:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.093 ************************************ 00:04:32.093 START TEST env_mem_callbacks 00:04:32.093 ************************************ 00:04:32.093 18:01:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.093 EAL: Detected CPU lcores: 10 00:04:32.093 EAL: Detected NUMA nodes: 1 00:04:32.093 EAL: Detected shared linkage of DPDK 00:04:32.093 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.093 EAL: Selected IOVA mode 'PA' 00:04:32.093 00:04:32.093 00:04:32.093 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.093 http://cunit.sourceforge.net/ 00:04:32.093 00:04:32.094 00:04:32.094 Suite: memory 00:04:32.094 Test: test ... 
00:04:32.094 register 0x200000200000 2097152 00:04:32.094 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.094 malloc 3145728 00:04:32.094 register 0x200000400000 4194304 00:04:32.094 buf 0x2000004fffc0 len 3145728 PASSED 00:04:32.094 malloc 64 00:04:32.094 buf 0x2000004ffec0 len 64 PASSED 00:04:32.094 malloc 4194304 00:04:32.094 register 0x200000800000 6291456 00:04:32.094 buf 0x2000009fffc0 len 4194304 PASSED 00:04:32.094 free 0x2000004fffc0 3145728 00:04:32.094 free 0x2000004ffec0 64 00:04:32.094 unregister 0x200000400000 4194304 PASSED 00:04:32.094 free 0x2000009fffc0 4194304 00:04:32.094 unregister 0x200000800000 6291456 PASSED 00:04:32.490 malloc 8388608 00:04:32.490 register 0x200000400000 10485760 00:04:32.490 buf 0x2000005fffc0 len 8388608 PASSED 00:04:32.490 free 0x2000005fffc0 8388608 00:04:32.490 unregister 0x200000400000 10485760 PASSED 00:04:32.490 passed 00:04:32.490 00:04:32.490 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.490 suites 1 1 n/a 0 0 00:04:32.490 tests 1 1 1 0 0 00:04:32.490 asserts 15 15 15 0 n/a 00:04:32.490 00:04:32.490 Elapsed time = 0.096 seconds 00:04:32.490 00:04:32.490 real 0m0.305s 00:04:32.490 user 0m0.125s 00:04:32.490 sys 0m0.076s 00:04:32.490 18:01:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.490 18:01:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.490 ************************************ 00:04:32.490 END TEST env_mem_callbacks 00:04:32.490 ************************************ 00:04:32.490 ************************************ 00:04:32.490 END TEST env 00:04:32.490 ************************************ 00:04:32.490 00:04:32.490 real 0m12.065s 00:04:32.490 user 0m9.655s 00:04:32.490 sys 0m2.023s 00:04:32.490 18:01:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.490 18:01:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.490 18:01:44 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.490 18:01:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.490 18:01:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.490 18:01:44 -- common/autotest_common.sh@10 -- # set +x 00:04:32.490 ************************************ 00:04:32.490 START TEST rpc 00:04:32.490 ************************************ 00:04:32.490 18:01:44 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.490 * Looking for test storage... 00:04:32.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.490 18:01:44 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.490 18:01:44 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.490 18:01:44 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.750 18:01:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.750 18:01:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.750 18:01:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.750 18:01:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.750 18:01:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.750 18:01:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.750 18:01:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.750 18:01:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.750 18:01:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.750 18:01:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.750 18:01:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.750 18:01:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.750 18:01:44 rpc -- scripts/common.sh@345 -- # : 1 00:04:32.750 18:01:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.750 18:01:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.750 18:01:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.750 18:01:44 rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.750 18:01:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.750 18:01:44 rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.750 18:01:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.750 18:01:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.750 18:01:44 rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.750 18:01:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.750 18:01:44 rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.750 18:01:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.750 18:01:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.750 18:01:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.750 18:01:44 rpc -- scripts/common.sh@368 -- # return 0 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.750 --rc genhtml_branch_coverage=1 00:04:32.750 --rc genhtml_function_coverage=1 00:04:32.750 --rc genhtml_legend=1 00:04:32.750 --rc geninfo_all_blocks=1 00:04:32.750 --rc geninfo_unexecuted_blocks=1 00:04:32.750 00:04:32.750 ' 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.750 --rc genhtml_branch_coverage=1 00:04:32.750 --rc genhtml_function_coverage=1 00:04:32.750 --rc genhtml_legend=1 00:04:32.750 --rc geninfo_all_blocks=1 00:04:32.750 --rc geninfo_unexecuted_blocks=1 00:04:32.750 00:04:32.750 ' 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:32.750 --rc genhtml_branch_coverage=1 00:04:32.750 --rc genhtml_function_coverage=1 00:04:32.750 --rc genhtml_legend=1 00:04:32.750 --rc geninfo_all_blocks=1 00:04:32.750 --rc geninfo_unexecuted_blocks=1 00:04:32.750 00:04:32.750 ' 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.750 --rc genhtml_branch_coverage=1 00:04:32.750 --rc genhtml_function_coverage=1 00:04:32.750 --rc genhtml_legend=1 00:04:32.750 --rc geninfo_all_blocks=1 00:04:32.750 --rc geninfo_unexecuted_blocks=1 00:04:32.750 00:04:32.750 ' 00:04:32.750 18:01:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57113 00:04:32.750 18:01:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:32.750 18:01:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.750 18:01:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57113 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 57113 ']' 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.750 18:01:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.750 [2024-12-06 18:01:44.818208] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
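The xtrace above walks through the `cmp_versions` helper in scripts/common.sh checking `lt 1.15 2` for lcov: it splits each version on `.`/`-`/`:` (`IFS=.-:`), then compares components left to right. A self-contained Python sketch of the same element-wise comparison (mirroring the traced shell logic, not a drop-in replacement for it):

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Element-wise version comparison in the spirit of cmp_versions:
    split on '.', '-' or ':' and compare numeric parts left to right."""
    def split(v):
        return [int(x) for x in re.split(r"[.:\-]", v) if x.isdigit()]
    va, vb = split(a), split(b)
    for x, y in zip(va, vb):
        if x != y:
            return x < y
    return len(va) < len(vb)  # shorter prefix compares as smaller

print(version_lt("1.15", "2"))  # the lcov check in the log: True
```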
00:04:32.750 [2024-12-06 18:01:44.818448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57113 ] 00:04:33.008 [2024-12-06 18:01:44.999907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.008 [2024-12-06 18:01:45.136411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.008 [2024-12-06 18:01:45.136594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57113' to capture a snapshot of events at runtime. 00:04:33.008 [2024-12-06 18:01:45.136644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:33.008 [2024-12-06 18:01:45.136684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:33.008 [2024-12-06 18:01:45.136710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57113 for offline analysis/debug. 
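spdk_tgt was started here with `-e bdev`, and the trace_get_info dump further down reports `"tpoint_group_mask": "0x8"` with each group carrying a single-bit mask (iscsi_conn 0x2, scsi 0x4, bdev 0x8, nvmf_rdma 0x10, nvmf_tcp 0x20, ...). A sketch of how such a group mask composes from bit positions; the numeric positions below are read off this log's own trace_get_info output, not taken from SPDK headers:

```python
# Bit positions implied by the per-group masks in trace_get_info below.
TPOINT_GROUPS = {"iscsi_conn": 1, "scsi": 2, "bdev": 3,
                 "nvmf_rdma": 4, "nvmf_tcp": 5}

def group_mask(*names):
    """OR together the single-bit masks of the named tracepoint groups."""
    mask = 0
    for name in names:
        mask |= 1 << TPOINT_GROUPS[name]
    return hex(mask)

print(group_mask("bdev"))  # → 0x8, matching tpoint_group_mask in the log
```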
00:04:33.008 [2024-12-06 18:01:45.138325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.382 18:01:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.382 18:01:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.382 18:01:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.382 18:01:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.382 18:01:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:34.382 18:01:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:34.382 18:01:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.382 18:01:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.382 18:01:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.382 ************************************ 00:04:34.382 START TEST rpc_integrity 00:04:34.382 ************************************ 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.382 18:01:46 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.382 { 00:04:34.382 "name": "Malloc0", 00:04:34.382 "aliases": [ 00:04:34.382 "3586e8de-4635-4185-972c-727f74683dd8" 00:04:34.382 ], 00:04:34.382 "product_name": "Malloc disk", 00:04:34.382 "block_size": 512, 00:04:34.382 "num_blocks": 16384, 00:04:34.382 "uuid": "3586e8de-4635-4185-972c-727f74683dd8", 00:04:34.382 "assigned_rate_limits": { 00:04:34.382 "rw_ios_per_sec": 0, 00:04:34.382 "rw_mbytes_per_sec": 0, 00:04:34.382 "r_mbytes_per_sec": 0, 00:04:34.382 "w_mbytes_per_sec": 0 00:04:34.382 }, 00:04:34.382 "claimed": false, 00:04:34.382 "zoned": false, 00:04:34.382 "supported_io_types": { 00:04:34.382 "read": true, 00:04:34.382 "write": true, 00:04:34.382 "unmap": true, 00:04:34.382 "flush": true, 00:04:34.382 "reset": true, 00:04:34.382 "nvme_admin": false, 00:04:34.382 "nvme_io": false, 00:04:34.382 "nvme_io_md": false, 00:04:34.382 "write_zeroes": true, 00:04:34.382 "zcopy": true, 00:04:34.382 "get_zone_info": false, 00:04:34.382 "zone_management": false, 00:04:34.382 "zone_append": false, 00:04:34.382 "compare": false, 00:04:34.382 "compare_and_write": false, 00:04:34.382 "abort": true, 00:04:34.382 "seek_hole": false, 
00:04:34.382 "seek_data": false, 00:04:34.382 "copy": true, 00:04:34.382 "nvme_iov_md": false 00:04:34.382 }, 00:04:34.382 "memory_domains": [ 00:04:34.382 { 00:04:34.382 "dma_device_id": "system", 00:04:34.382 "dma_device_type": 1 00:04:34.382 }, 00:04:34.382 { 00:04:34.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.382 "dma_device_type": 2 00:04:34.382 } 00:04:34.382 ], 00:04:34.382 "driver_specific": {} 00:04:34.382 } 00:04:34.382 ]' 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.382 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.382 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.383 [2024-12-06 18:01:46.331044] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:34.383 [2024-12-06 18:01:46.331184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.383 [2024-12-06 18:01:46.331228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:34.383 [2024-12-06 18:01:46.331246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.383 [2024-12-06 18:01:46.334131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.383 [2024-12-06 18:01:46.334211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.383 Passthru0 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.383 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.383 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.383 { 00:04:34.383 "name": "Malloc0", 00:04:34.383 "aliases": [ 00:04:34.383 "3586e8de-4635-4185-972c-727f74683dd8" 00:04:34.383 ], 00:04:34.383 "product_name": "Malloc disk", 00:04:34.383 "block_size": 512, 00:04:34.383 "num_blocks": 16384, 00:04:34.383 "uuid": "3586e8de-4635-4185-972c-727f74683dd8", 00:04:34.383 "assigned_rate_limits": { 00:04:34.383 "rw_ios_per_sec": 0, 00:04:34.383 "rw_mbytes_per_sec": 0, 00:04:34.383 "r_mbytes_per_sec": 0, 00:04:34.383 "w_mbytes_per_sec": 0 00:04:34.383 }, 00:04:34.383 "claimed": true, 00:04:34.383 "claim_type": "exclusive_write", 00:04:34.383 "zoned": false, 00:04:34.383 "supported_io_types": { 00:04:34.383 "read": true, 00:04:34.383 "write": true, 00:04:34.383 "unmap": true, 00:04:34.383 "flush": true, 00:04:34.383 "reset": true, 00:04:34.383 "nvme_admin": false, 00:04:34.383 "nvme_io": false, 00:04:34.383 "nvme_io_md": false, 00:04:34.383 "write_zeroes": true, 00:04:34.383 "zcopy": true, 00:04:34.383 "get_zone_info": false, 00:04:34.383 "zone_management": false, 00:04:34.383 "zone_append": false, 00:04:34.383 "compare": false, 00:04:34.383 "compare_and_write": false, 00:04:34.383 "abort": true, 00:04:34.383 "seek_hole": false, 00:04:34.383 "seek_data": false, 00:04:34.383 "copy": true, 00:04:34.383 "nvme_iov_md": false 00:04:34.383 }, 00:04:34.383 "memory_domains": [ 00:04:34.383 { 00:04:34.383 "dma_device_id": "system", 00:04:34.383 "dma_device_type": 1 00:04:34.383 }, 00:04:34.383 { 00:04:34.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.383 "dma_device_type": 2 00:04:34.383 } 00:04:34.383 ], 00:04:34.383 "driver_specific": {} 00:04:34.383 }, 00:04:34.383 { 00:04:34.383 "name": "Passthru0", 00:04:34.383 "aliases": [ 00:04:34.383 "9a06bb97-d169-5b79-ab02-d875e2c0e37a" 00:04:34.383 ], 00:04:34.383 "product_name": "passthru", 00:04:34.383 
"block_size": 512, 00:04:34.383 "num_blocks": 16384, 00:04:34.383 "uuid": "9a06bb97-d169-5b79-ab02-d875e2c0e37a", 00:04:34.383 "assigned_rate_limits": { 00:04:34.383 "rw_ios_per_sec": 0, 00:04:34.383 "rw_mbytes_per_sec": 0, 00:04:34.383 "r_mbytes_per_sec": 0, 00:04:34.383 "w_mbytes_per_sec": 0 00:04:34.383 }, 00:04:34.383 "claimed": false, 00:04:34.383 "zoned": false, 00:04:34.383 "supported_io_types": { 00:04:34.383 "read": true, 00:04:34.383 "write": true, 00:04:34.383 "unmap": true, 00:04:34.383 "flush": true, 00:04:34.383 "reset": true, 00:04:34.383 "nvme_admin": false, 00:04:34.383 "nvme_io": false, 00:04:34.383 "nvme_io_md": false, 00:04:34.383 "write_zeroes": true, 00:04:34.383 "zcopy": true, 00:04:34.383 "get_zone_info": false, 00:04:34.383 "zone_management": false, 00:04:34.383 "zone_append": false, 00:04:34.383 "compare": false, 00:04:34.383 "compare_and_write": false, 00:04:34.383 "abort": true, 00:04:34.383 "seek_hole": false, 00:04:34.383 "seek_data": false, 00:04:34.383 "copy": true, 00:04:34.383 "nvme_iov_md": false 00:04:34.383 }, 00:04:34.383 "memory_domains": [ 00:04:34.383 { 00:04:34.383 "dma_device_id": "system", 00:04:34.383 "dma_device_type": 1 00:04:34.383 }, 00:04:34.383 { 00:04:34.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.383 "dma_device_type": 2 00:04:34.383 } 00:04:34.383 ], 00:04:34.383 "driver_specific": { 00:04:34.383 "passthru": { 00:04:34.383 "name": "Passthru0", 00:04:34.383 "base_bdev_name": "Malloc0" 00:04:34.383 } 00:04:34.383 } 00:04:34.383 } 00:04:34.383 ]' 00:04:34.383 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.383 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.383 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.383 18:01:46 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.383 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.383 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.383 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.383 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.383 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.642 18:01:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.642 00:04:34.642 real 0m0.392s 00:04:34.642 user 0m0.206s 00:04:34.642 sys 0m0.054s 00:04:34.642 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.642 18:01:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.642 ************************************ 00:04:34.642 END TEST rpc_integrity 00:04:34.642 ************************************ 00:04:34.642 18:01:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.642 18:01:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.642 18:01:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.642 18:01:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.642 ************************************ 00:04:34.642 START TEST rpc_plugins 00:04:34.642 ************************************ 00:04:34.642 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:34.642 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.642 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.642 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.642 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.642 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.643 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.643 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.643 { 00:04:34.643 "name": "Malloc1", 00:04:34.643 "aliases": [ 00:04:34.643 "69afe68e-a3f4-4908-b0fb-63941f311c01" 00:04:34.643 ], 00:04:34.643 "product_name": "Malloc disk", 00:04:34.643 "block_size": 4096, 00:04:34.643 "num_blocks": 256, 00:04:34.643 "uuid": "69afe68e-a3f4-4908-b0fb-63941f311c01", 00:04:34.643 "assigned_rate_limits": { 00:04:34.643 "rw_ios_per_sec": 0, 00:04:34.643 "rw_mbytes_per_sec": 0, 00:04:34.643 "r_mbytes_per_sec": 0, 00:04:34.643 "w_mbytes_per_sec": 0 00:04:34.643 }, 00:04:34.643 "claimed": false, 00:04:34.643 "zoned": false, 00:04:34.643 "supported_io_types": { 00:04:34.643 "read": true, 00:04:34.643 "write": true, 00:04:34.643 "unmap": true, 00:04:34.643 "flush": true, 00:04:34.643 "reset": true, 00:04:34.643 "nvme_admin": false, 00:04:34.643 "nvme_io": false, 00:04:34.643 "nvme_io_md": false, 00:04:34.643 "write_zeroes": true, 00:04:34.643 "zcopy": true, 00:04:34.643 "get_zone_info": false, 00:04:34.643 "zone_management": false, 00:04:34.643 "zone_append": false, 00:04:34.643 "compare": false, 00:04:34.643 "compare_and_write": false, 00:04:34.643 "abort": true, 00:04:34.643 "seek_hole": false, 00:04:34.643 "seek_data": false, 00:04:34.643 "copy": 
true, 00:04:34.643 "nvme_iov_md": false 00:04:34.643 }, 00:04:34.643 "memory_domains": [ 00:04:34.643 { 00:04:34.643 "dma_device_id": "system", 00:04:34.643 "dma_device_type": 1 00:04:34.643 }, 00:04:34.643 { 00:04:34.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.643 "dma_device_type": 2 00:04:34.643 } 00:04:34.643 ], 00:04:34.643 "driver_specific": {} 00:04:34.643 } 00:04:34.643 ]' 00:04:34.643 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.643 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.643 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.643 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.643 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.643 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.643 ************************************ 00:04:34.643 END TEST rpc_plugins 00:04:34.643 ************************************ 00:04:34.643 18:01:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.643 00:04:34.643 real 0m0.176s 00:04:34.643 user 0m0.093s 00:04:34.643 sys 0m0.030s 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.643 18:01:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.901 18:01:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.901 18:01:46 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.901 18:01:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.901 18:01:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.901 ************************************ 00:04:34.901 START TEST rpc_trace_cmd_test 00:04:34.901 ************************************ 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.901 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57113", 00:04:34.901 "tpoint_group_mask": "0x8", 00:04:34.901 "iscsi_conn": { 00:04:34.901 "mask": "0x2", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "scsi": { 00:04:34.901 "mask": "0x4", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "bdev": { 00:04:34.901 "mask": "0x8", 00:04:34.901 "tpoint_mask": "0xffffffffffffffff" 00:04:34.901 }, 00:04:34.901 "nvmf_rdma": { 00:04:34.901 "mask": "0x10", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "nvmf_tcp": { 00:04:34.901 "mask": "0x20", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "ftl": { 00:04:34.901 "mask": "0x40", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "blobfs": { 00:04:34.901 "mask": "0x80", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "dsa": { 00:04:34.901 "mask": "0x200", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "thread": { 00:04:34.901 "mask": "0x400", 00:04:34.901 
"tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "nvme_pcie": { 00:04:34.901 "mask": "0x800", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "iaa": { 00:04:34.901 "mask": "0x1000", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "nvme_tcp": { 00:04:34.901 "mask": "0x2000", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "bdev_nvme": { 00:04:34.901 "mask": "0x4000", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "sock": { 00:04:34.901 "mask": "0x8000", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "blob": { 00:04:34.901 "mask": "0x10000", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "bdev_raid": { 00:04:34.901 "mask": "0x20000", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 }, 00:04:34.901 "scheduler": { 00:04:34.901 "mask": "0x40000", 00:04:34.901 "tpoint_mask": "0x0" 00:04:34.901 } 00:04:34.901 }' 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.901 18:01:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.901 18:01:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.901 18:01:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.159 18:01:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.160 18:01:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.160 ************************************ 00:04:35.160 END TEST rpc_trace_cmd_test 00:04:35.160 ************************************ 00:04:35.160 18:01:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.160 00:04:35.160 real 0m0.247s 00:04:35.160 user 
0m0.194s 00:04:35.160 sys 0m0.043s 00:04:35.160 18:01:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.160 18:01:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.160 18:01:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:35.160 18:01:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:35.160 18:01:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:35.160 18:01:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.160 18:01:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.160 18:01:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.160 ************************************ 00:04:35.160 START TEST rpc_daemon_integrity 00:04:35.160 ************************************ 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.160 { 00:04:35.160 "name": "Malloc2", 00:04:35.160 "aliases": [ 00:04:35.160 "9d813b99-c0ac-4a0f-bd8c-edcd3a30c6ad" 00:04:35.160 ], 00:04:35.160 "product_name": "Malloc disk", 00:04:35.160 "block_size": 512, 00:04:35.160 "num_blocks": 16384, 00:04:35.160 "uuid": "9d813b99-c0ac-4a0f-bd8c-edcd3a30c6ad", 00:04:35.160 "assigned_rate_limits": { 00:04:35.160 "rw_ios_per_sec": 0, 00:04:35.160 "rw_mbytes_per_sec": 0, 00:04:35.160 "r_mbytes_per_sec": 0, 00:04:35.160 "w_mbytes_per_sec": 0 00:04:35.160 }, 00:04:35.160 "claimed": false, 00:04:35.160 "zoned": false, 00:04:35.160 "supported_io_types": { 00:04:35.160 "read": true, 00:04:35.160 "write": true, 00:04:35.160 "unmap": true, 00:04:35.160 "flush": true, 00:04:35.160 "reset": true, 00:04:35.160 "nvme_admin": false, 00:04:35.160 "nvme_io": false, 00:04:35.160 "nvme_io_md": false, 00:04:35.160 "write_zeroes": true, 00:04:35.160 "zcopy": true, 00:04:35.160 "get_zone_info": false, 00:04:35.160 "zone_management": false, 00:04:35.160 "zone_append": false, 00:04:35.160 "compare": false, 00:04:35.160 "compare_and_write": false, 00:04:35.160 "abort": true, 00:04:35.160 "seek_hole": false, 00:04:35.160 "seek_data": false, 00:04:35.160 "copy": true, 00:04:35.160 "nvme_iov_md": false 00:04:35.160 }, 00:04:35.160 "memory_domains": [ 00:04:35.160 { 00:04:35.160 "dma_device_id": "system", 00:04:35.160 "dma_device_type": 1 00:04:35.160 }, 00:04:35.160 { 00:04:35.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.160 "dma_device_type": 2 00:04:35.160 } 
00:04:35.160 ], 00:04:35.160 "driver_specific": {} 00:04:35.160 } 00:04:35.160 ]' 00:04:35.160 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.419 [2024-12-06 18:01:47.374184] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:35.419 [2024-12-06 18:01:47.374401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.419 [2024-12-06 18:01:47.374451] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:35.419 [2024-12-06 18:01:47.374468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.419 [2024-12-06 18:01:47.377285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.419 [2024-12-06 18:01:47.377362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.419 Passthru0 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.419 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.419 { 00:04:35.419 "name": "Malloc2", 00:04:35.419 "aliases": [ 00:04:35.419 "9d813b99-c0ac-4a0f-bd8c-edcd3a30c6ad" 
00:04:35.419 ], 00:04:35.419 "product_name": "Malloc disk", 00:04:35.419 "block_size": 512, 00:04:35.419 "num_blocks": 16384, 00:04:35.419 "uuid": "9d813b99-c0ac-4a0f-bd8c-edcd3a30c6ad", 00:04:35.419 "assigned_rate_limits": { 00:04:35.419 "rw_ios_per_sec": 0, 00:04:35.419 "rw_mbytes_per_sec": 0, 00:04:35.419 "r_mbytes_per_sec": 0, 00:04:35.419 "w_mbytes_per_sec": 0 00:04:35.419 }, 00:04:35.419 "claimed": true, 00:04:35.419 "claim_type": "exclusive_write", 00:04:35.419 "zoned": false, 00:04:35.419 "supported_io_types": { 00:04:35.419 "read": true, 00:04:35.419 "write": true, 00:04:35.419 "unmap": true, 00:04:35.419 "flush": true, 00:04:35.419 "reset": true, 00:04:35.419 "nvme_admin": false, 00:04:35.419 "nvme_io": false, 00:04:35.419 "nvme_io_md": false, 00:04:35.419 "write_zeroes": true, 00:04:35.419 "zcopy": true, 00:04:35.419 "get_zone_info": false, 00:04:35.419 "zone_management": false, 00:04:35.419 "zone_append": false, 00:04:35.419 "compare": false, 00:04:35.419 "compare_and_write": false, 00:04:35.419 "abort": true, 00:04:35.419 "seek_hole": false, 00:04:35.419 "seek_data": false, 00:04:35.419 "copy": true, 00:04:35.419 "nvme_iov_md": false 00:04:35.419 }, 00:04:35.419 "memory_domains": [ 00:04:35.419 { 00:04:35.419 "dma_device_id": "system", 00:04:35.419 "dma_device_type": 1 00:04:35.419 }, 00:04:35.419 { 00:04:35.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.419 "dma_device_type": 2 00:04:35.419 } 00:04:35.419 ], 00:04:35.419 "driver_specific": {} 00:04:35.419 }, 00:04:35.419 { 00:04:35.419 "name": "Passthru0", 00:04:35.419 "aliases": [ 00:04:35.419 "0972a857-b3d9-5656-8bbc-15deec45d563" 00:04:35.419 ], 00:04:35.419 "product_name": "passthru", 00:04:35.419 "block_size": 512, 00:04:35.419 "num_blocks": 16384, 00:04:35.419 "uuid": "0972a857-b3d9-5656-8bbc-15deec45d563", 00:04:35.419 "assigned_rate_limits": { 00:04:35.419 "rw_ios_per_sec": 0, 00:04:35.419 "rw_mbytes_per_sec": 0, 00:04:35.419 "r_mbytes_per_sec": 0, 00:04:35.419 "w_mbytes_per_sec": 0 
00:04:35.419 }, 00:04:35.419 "claimed": false, 00:04:35.419 "zoned": false, 00:04:35.419 "supported_io_types": { 00:04:35.419 "read": true, 00:04:35.419 "write": true, 00:04:35.419 "unmap": true, 00:04:35.419 "flush": true, 00:04:35.419 "reset": true, 00:04:35.419 "nvme_admin": false, 00:04:35.419 "nvme_io": false, 00:04:35.419 "nvme_io_md": false, 00:04:35.419 "write_zeroes": true, 00:04:35.419 "zcopy": true, 00:04:35.419 "get_zone_info": false, 00:04:35.419 "zone_management": false, 00:04:35.419 "zone_append": false, 00:04:35.419 "compare": false, 00:04:35.419 "compare_and_write": false, 00:04:35.419 "abort": true, 00:04:35.419 "seek_hole": false, 00:04:35.419 "seek_data": false, 00:04:35.419 "copy": true, 00:04:35.419 "nvme_iov_md": false 00:04:35.419 }, 00:04:35.419 "memory_domains": [ 00:04:35.419 { 00:04:35.419 "dma_device_id": "system", 00:04:35.419 "dma_device_type": 1 00:04:35.419 }, 00:04:35.419 { 00:04:35.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.419 "dma_device_type": 2 00:04:35.419 } 00:04:35.419 ], 00:04:35.419 "driver_specific": { 00:04:35.419 "passthru": { 00:04:35.419 "name": "Passthru0", 00:04:35.419 "base_bdev_name": "Malloc2" 00:04:35.419 } 00:04:35.419 } 00:04:35.419 } 00:04:35.420 ]' 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.420 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.677 18:01:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.677 00:04:35.677 real 0m0.392s 00:04:35.677 user 0m0.209s 00:04:35.677 sys 0m0.065s 00:04:35.677 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.677 18:01:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.677 ************************************ 00:04:35.677 END TEST rpc_daemon_integrity 00:04:35.677 ************************************ 00:04:35.677 18:01:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.677 18:01:47 rpc -- rpc/rpc.sh@84 -- # killprocess 57113 00:04:35.677 18:01:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 57113 ']' 00:04:35.677 18:01:47 rpc -- common/autotest_common.sh@958 -- # kill -0 57113 00:04:35.677 18:01:47 rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.677 18:01:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.677 18:01:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57113 00:04:35.677 18:01:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.677 18:01:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.677 
18:01:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57113' 00:04:35.677 killing process with pid 57113 00:04:35.677 18:01:47 rpc -- common/autotest_common.sh@973 -- # kill 57113 00:04:35.677 18:01:47 rpc -- common/autotest_common.sh@978 -- # wait 57113 00:04:38.963 00:04:38.963 real 0m5.939s 00:04:38.963 user 0m6.556s 00:04:38.963 sys 0m1.044s 00:04:38.963 ************************************ 00:04:38.963 END TEST rpc 00:04:38.963 ************************************ 00:04:38.963 18:01:50 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.963 18:01:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.963 18:01:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:38.963 18:01:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.963 18:01:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.963 18:01:50 -- common/autotest_common.sh@10 -- # set +x 00:04:38.963 ************************************ 00:04:38.963 START TEST skip_rpc 00:04:38.963 ************************************ 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:38.963 * Looking for test storage... 
00:04:38.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.963 18:01:50 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.963 --rc genhtml_branch_coverage=1 00:04:38.963 --rc genhtml_function_coverage=1 00:04:38.963 --rc genhtml_legend=1 00:04:38.963 --rc geninfo_all_blocks=1 00:04:38.963 --rc geninfo_unexecuted_blocks=1 00:04:38.963 00:04:38.963 ' 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.963 --rc genhtml_branch_coverage=1 00:04:38.963 --rc genhtml_function_coverage=1 00:04:38.963 --rc genhtml_legend=1 00:04:38.963 --rc geninfo_all_blocks=1 00:04:38.963 --rc geninfo_unexecuted_blocks=1 00:04:38.963 00:04:38.963 ' 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:38.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.963 --rc genhtml_branch_coverage=1 00:04:38.963 --rc genhtml_function_coverage=1 00:04:38.963 --rc genhtml_legend=1 00:04:38.963 --rc geninfo_all_blocks=1 00:04:38.963 --rc geninfo_unexecuted_blocks=1 00:04:38.963 00:04:38.963 ' 00:04:38.963 18:01:50 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.963 --rc genhtml_branch_coverage=1 00:04:38.963 --rc genhtml_function_coverage=1 00:04:38.963 --rc genhtml_legend=1 00:04:38.963 --rc geninfo_all_blocks=1 00:04:38.964 --rc geninfo_unexecuted_blocks=1 00:04:38.964 00:04:38.964 ' 00:04:38.964 18:01:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:38.964 18:01:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.964 18:01:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:38.964 18:01:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.964 18:01:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.964 18:01:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.964 ************************************ 00:04:38.964 START TEST skip_rpc 00:04:38.964 ************************************ 00:04:38.964 18:01:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:38.964 18:01:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57343 00:04:38.964 18:01:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:38.964 18:01:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.964 18:01:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:38.964 [2024-12-06 18:01:50.839293] Starting SPDK v25.01-pre 
git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:04:38.964 [2024-12-06 18:01:50.839448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57343 ] 00:04:38.964 [2024-12-06 18:01:51.018396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.222 [2024-12-06 18:01:51.145482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57343 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57343 ']' 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57343 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57343 00:04:44.494 killing process with pid 57343 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57343' 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57343 00:04:44.494 18:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57343 00:04:46.413 00:04:46.413 real 0m7.698s 00:04:46.413 user 0m7.187s 00:04:46.413 sys 0m0.426s 00:04:46.413 18:01:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.413 ************************************ 00:04:46.413 END TEST skip_rpc 00:04:46.413 ************************************ 00:04:46.413 18:01:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.413 18:01:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.413 18:01:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.413 18:01:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.413 18:01:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.413 
************************************ 00:04:46.413 START TEST skip_rpc_with_json 00:04:46.413 ************************************ 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57458 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57458 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57458 ']' 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.413 18:01:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.672 [2024-12-06 18:01:58.602899] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:04:46.672 [2024-12-06 18:01:58.603698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57458 ] 00:04:46.672 [2024-12-06 18:01:58.780125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.931 [2024-12-06 18:01:58.908056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.867 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.867 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:47.867 18:01:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.867 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.867 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.867 [2024-12-06 18:01:59.862178] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.867 request: 00:04:47.867 { 00:04:47.867 "trtype": "tcp", 00:04:47.867 "method": "nvmf_get_transports", 00:04:47.867 "req_id": 1 00:04:47.867 } 00:04:47.868 Got JSON-RPC error response 00:04:47.868 response: 00:04:47.868 { 00:04:47.868 "code": -19, 00:04:47.868 "message": "No such device" 00:04:47.868 } 00:04:47.868 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.868 18:01:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.868 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.868 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.868 [2024-12-06 18:01:59.874322] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:47.868 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.868 18:01:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.868 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.868 18:01:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.127 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.127 18:02:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.127 { 00:04:48.127 "subsystems": [ 00:04:48.127 { 00:04:48.127 "subsystem": "fsdev", 00:04:48.127 "config": [ 00:04:48.127 { 00:04:48.127 "method": "fsdev_set_opts", 00:04:48.127 "params": { 00:04:48.127 "fsdev_io_pool_size": 65535, 00:04:48.127 "fsdev_io_cache_size": 256 00:04:48.127 } 00:04:48.127 } 00:04:48.127 ] 00:04:48.127 }, 00:04:48.127 { 00:04:48.127 "subsystem": "keyring", 00:04:48.127 "config": [] 00:04:48.127 }, 00:04:48.127 { 00:04:48.127 "subsystem": "iobuf", 00:04:48.127 "config": [ 00:04:48.127 { 00:04:48.127 "method": "iobuf_set_options", 00:04:48.127 "params": { 00:04:48.127 "small_pool_count": 8192, 00:04:48.127 "large_pool_count": 1024, 00:04:48.127 "small_bufsize": 8192, 00:04:48.127 "large_bufsize": 135168, 00:04:48.127 "enable_numa": false 00:04:48.127 } 00:04:48.127 } 00:04:48.127 ] 00:04:48.127 }, 00:04:48.127 { 00:04:48.127 "subsystem": "sock", 00:04:48.127 "config": [ 00:04:48.127 { 00:04:48.127 "method": "sock_set_default_impl", 00:04:48.127 "params": { 00:04:48.127 "impl_name": "posix" 00:04:48.127 } 00:04:48.127 }, 00:04:48.127 { 00:04:48.127 "method": "sock_impl_set_options", 00:04:48.127 "params": { 00:04:48.127 "impl_name": "ssl", 00:04:48.127 "recv_buf_size": 4096, 00:04:48.127 "send_buf_size": 4096, 00:04:48.127 "enable_recv_pipe": true, 00:04:48.127 "enable_quickack": false, 00:04:48.127 
"enable_placement_id": 0, 00:04:48.127 "enable_zerocopy_send_server": true, 00:04:48.127 "enable_zerocopy_send_client": false, 00:04:48.127 "zerocopy_threshold": 0, 00:04:48.127 "tls_version": 0, 00:04:48.127 "enable_ktls": false 00:04:48.127 } 00:04:48.127 }, 00:04:48.127 { 00:04:48.127 "method": "sock_impl_set_options", 00:04:48.127 "params": { 00:04:48.127 "impl_name": "posix", 00:04:48.127 "recv_buf_size": 2097152, 00:04:48.127 "send_buf_size": 2097152, 00:04:48.127 "enable_recv_pipe": true, 00:04:48.127 "enable_quickack": false, 00:04:48.127 "enable_placement_id": 0, 00:04:48.127 "enable_zerocopy_send_server": true, 00:04:48.127 "enable_zerocopy_send_client": false, 00:04:48.127 "zerocopy_threshold": 0, 00:04:48.127 "tls_version": 0, 00:04:48.127 "enable_ktls": false 00:04:48.127 } 00:04:48.127 } 00:04:48.127 ] 00:04:48.127 }, 00:04:48.127 { 00:04:48.127 "subsystem": "vmd", 00:04:48.127 "config": [] 00:04:48.127 }, 00:04:48.127 { 00:04:48.127 "subsystem": "accel", 00:04:48.127 "config": [ 00:04:48.127 { 00:04:48.127 "method": "accel_set_options", 00:04:48.127 "params": { 00:04:48.127 "small_cache_size": 128, 00:04:48.127 "large_cache_size": 16, 00:04:48.127 "task_count": 2048, 00:04:48.127 "sequence_count": 2048, 00:04:48.127 "buf_count": 2048 00:04:48.127 } 00:04:48.127 } 00:04:48.127 ] 00:04:48.127 }, 00:04:48.127 { 00:04:48.127 "subsystem": "bdev", 00:04:48.127 "config": [ 00:04:48.127 { 00:04:48.127 "method": "bdev_set_options", 00:04:48.127 "params": { 00:04:48.127 "bdev_io_pool_size": 65535, 00:04:48.127 "bdev_io_cache_size": 256, 00:04:48.127 "bdev_auto_examine": true, 00:04:48.127 "iobuf_small_cache_size": 128, 00:04:48.127 "iobuf_large_cache_size": 16 00:04:48.128 } 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "method": "bdev_raid_set_options", 00:04:48.128 "params": { 00:04:48.128 "process_window_size_kb": 1024, 00:04:48.128 "process_max_bandwidth_mb_sec": 0 00:04:48.128 } 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "method": "bdev_iscsi_set_options", 
00:04:48.128 "params": { 00:04:48.128 "timeout_sec": 30 00:04:48.128 } 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "method": "bdev_nvme_set_options", 00:04:48.128 "params": { 00:04:48.128 "action_on_timeout": "none", 00:04:48.128 "timeout_us": 0, 00:04:48.128 "timeout_admin_us": 0, 00:04:48.128 "keep_alive_timeout_ms": 10000, 00:04:48.128 "arbitration_burst": 0, 00:04:48.128 "low_priority_weight": 0, 00:04:48.128 "medium_priority_weight": 0, 00:04:48.128 "high_priority_weight": 0, 00:04:48.128 "nvme_adminq_poll_period_us": 10000, 00:04:48.128 "nvme_ioq_poll_period_us": 0, 00:04:48.128 "io_queue_requests": 0, 00:04:48.128 "delay_cmd_submit": true, 00:04:48.128 "transport_retry_count": 4, 00:04:48.128 "bdev_retry_count": 3, 00:04:48.128 "transport_ack_timeout": 0, 00:04:48.128 "ctrlr_loss_timeout_sec": 0, 00:04:48.128 "reconnect_delay_sec": 0, 00:04:48.128 "fast_io_fail_timeout_sec": 0, 00:04:48.128 "disable_auto_failback": false, 00:04:48.128 "generate_uuids": false, 00:04:48.128 "transport_tos": 0, 00:04:48.128 "nvme_error_stat": false, 00:04:48.128 "rdma_srq_size": 0, 00:04:48.128 "io_path_stat": false, 00:04:48.128 "allow_accel_sequence": false, 00:04:48.128 "rdma_max_cq_size": 0, 00:04:48.128 "rdma_cm_event_timeout_ms": 0, 00:04:48.128 "dhchap_digests": [ 00:04:48.128 "sha256", 00:04:48.128 "sha384", 00:04:48.128 "sha512" 00:04:48.128 ], 00:04:48.128 "dhchap_dhgroups": [ 00:04:48.128 "null", 00:04:48.128 "ffdhe2048", 00:04:48.128 "ffdhe3072", 00:04:48.128 "ffdhe4096", 00:04:48.128 "ffdhe6144", 00:04:48.128 "ffdhe8192" 00:04:48.128 ] 00:04:48.128 } 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "method": "bdev_nvme_set_hotplug", 00:04:48.128 "params": { 00:04:48.128 "period_us": 100000, 00:04:48.128 "enable": false 00:04:48.128 } 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "method": "bdev_wait_for_examine" 00:04:48.128 } 00:04:48.128 ] 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "subsystem": "scsi", 00:04:48.128 "config": null 00:04:48.128 }, 00:04:48.128 { 
00:04:48.128 "subsystem": "scheduler", 00:04:48.128 "config": [ 00:04:48.128 { 00:04:48.128 "method": "framework_set_scheduler", 00:04:48.128 "params": { 00:04:48.128 "name": "static" 00:04:48.128 } 00:04:48.128 } 00:04:48.128 ] 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "subsystem": "vhost_scsi", 00:04:48.128 "config": [] 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "subsystem": "vhost_blk", 00:04:48.128 "config": [] 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "subsystem": "ublk", 00:04:48.128 "config": [] 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "subsystem": "nbd", 00:04:48.128 "config": [] 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "subsystem": "nvmf", 00:04:48.128 "config": [ 00:04:48.128 { 00:04:48.128 "method": "nvmf_set_config", 00:04:48.128 "params": { 00:04:48.128 "discovery_filter": "match_any", 00:04:48.128 "admin_cmd_passthru": { 00:04:48.128 "identify_ctrlr": false 00:04:48.128 }, 00:04:48.128 "dhchap_digests": [ 00:04:48.128 "sha256", 00:04:48.128 "sha384", 00:04:48.128 "sha512" 00:04:48.128 ], 00:04:48.128 "dhchap_dhgroups": [ 00:04:48.128 "null", 00:04:48.128 "ffdhe2048", 00:04:48.128 "ffdhe3072", 00:04:48.128 "ffdhe4096", 00:04:48.128 "ffdhe6144", 00:04:48.128 "ffdhe8192" 00:04:48.128 ] 00:04:48.128 } 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "method": "nvmf_set_max_subsystems", 00:04:48.128 "params": { 00:04:48.128 "max_subsystems": 1024 00:04:48.128 } 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "method": "nvmf_set_crdt", 00:04:48.128 "params": { 00:04:48.128 "crdt1": 0, 00:04:48.128 "crdt2": 0, 00:04:48.128 "crdt3": 0 00:04:48.128 } 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "method": "nvmf_create_transport", 00:04:48.128 "params": { 00:04:48.128 "trtype": "TCP", 00:04:48.128 "max_queue_depth": 128, 00:04:48.128 "max_io_qpairs_per_ctrlr": 127, 00:04:48.128 "in_capsule_data_size": 4096, 00:04:48.128 "max_io_size": 131072, 00:04:48.128 "io_unit_size": 131072, 00:04:48.128 "max_aq_depth": 128, 00:04:48.128 "num_shared_buffers": 511, 
00:04:48.128 "buf_cache_size": 4294967295, 00:04:48.128 "dif_insert_or_strip": false, 00:04:48.128 "zcopy": false, 00:04:48.128 "c2h_success": true, 00:04:48.128 "sock_priority": 0, 00:04:48.128 "abort_timeout_sec": 1, 00:04:48.128 "ack_timeout": 0, 00:04:48.128 "data_wr_pool_size": 0 00:04:48.128 } 00:04:48.128 } 00:04:48.128 ] 00:04:48.128 }, 00:04:48.128 { 00:04:48.128 "subsystem": "iscsi", 00:04:48.128 "config": [ 00:04:48.128 { 00:04:48.128 "method": "iscsi_set_options", 00:04:48.128 "params": { 00:04:48.128 "node_base": "iqn.2016-06.io.spdk", 00:04:48.128 "max_sessions": 128, 00:04:48.128 "max_connections_per_session": 2, 00:04:48.128 "max_queue_depth": 64, 00:04:48.128 "default_time2wait": 2, 00:04:48.128 "default_time2retain": 20, 00:04:48.128 "first_burst_length": 8192, 00:04:48.128 "immediate_data": true, 00:04:48.128 "allow_duplicated_isid": false, 00:04:48.128 "error_recovery_level": 0, 00:04:48.128 "nop_timeout": 60, 00:04:48.128 "nop_in_interval": 30, 00:04:48.128 "disable_chap": false, 00:04:48.128 "require_chap": false, 00:04:48.128 "mutual_chap": false, 00:04:48.128 "chap_group": 0, 00:04:48.128 "max_large_datain_per_connection": 64, 00:04:48.128 "max_r2t_per_connection": 4, 00:04:48.128 "pdu_pool_size": 36864, 00:04:48.128 "immediate_data_pool_size": 16384, 00:04:48.128 "data_out_pool_size": 2048 00:04:48.128 } 00:04:48.128 } 00:04:48.128 ] 00:04:48.128 } 00:04:48.128 ] 00:04:48.128 } 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57458 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57458 ']' 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57458 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57458 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57458' 00:04:48.128 killing process with pid 57458 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57458 00:04:48.128 18:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57458 00:04:50.677 18:02:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57514 00:04:50.677 18:02:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.677 18:02:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57514 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57514 ']' 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57514 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57514 00:04:55.976 killing process with pid 57514 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57514' 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57514 00:04:55.976 18:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57514 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:58.515 ************************************ 00:04:58.515 END TEST skip_rpc_with_json 00:04:58.515 ************************************ 00:04:58.515 00:04:58.515 real 0m11.896s 00:04:58.515 user 0m11.389s 00:04:58.515 sys 0m0.894s 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.515 18:02:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:58.515 18:02:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.515 18:02:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.515 18:02:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.515 ************************************ 00:04:58.515 START TEST skip_rpc_with_delay 00:04:58.515 ************************************ 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:58.515 
18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.515 [2024-12-06 18:02:10.582558] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.515 00:04:58.515 real 0m0.191s 00:04:58.515 user 0m0.102s 00:04:58.515 sys 0m0.086s 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.515 ************************************ 00:04:58.515 END TEST skip_rpc_with_delay 00:04:58.515 ************************************ 00:04:58.515 18:02:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:58.776 18:02:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:58.776 18:02:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:58.776 18:02:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:58.776 18:02:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.776 18:02:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.776 18:02:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.776 ************************************ 00:04:58.776 START TEST exit_on_failed_rpc_init 00:04:58.776 ************************************ 00:04:58.776 18:02:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:58.776 18:02:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57648 00:04:58.776 18:02:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.776 18:02:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57648 00:04:58.776 18:02:10 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57648 ']' 00:04:58.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.776 18:02:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.776 18:02:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.776 18:02:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.776 18:02:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.776 18:02:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.776 [2024-12-06 18:02:10.841153] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:04:58.776 [2024-12-06 18:02:10.841299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57648 ] 00:04:59.035 [2024-12-06 18:02:11.022185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.035 [2024-12-06 18:02:11.160504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.074 18:02:12 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.074 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.334 [2024-12-06 18:02:12.256407] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:05:00.334 [2024-12-06 18:02:12.256674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57671 ] 00:05:00.334 [2024-12-06 18:02:12.438176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.594 [2024-12-06 18:02:12.578428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.594 [2024-12-06 18:02:12.578634] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:00.594 [2024-12-06 18:02:12.578962] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:00.594 [2024-12-06 18:02:12.579019] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.854 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:00.854 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.854 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:00.854 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:00.854 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:00.854 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.854 18:02:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:00.854 18:02:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57648 00:05:00.854 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57648 ']' 00:05:00.855 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57648 00:05:00.855 18:02:12 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:00.855 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.855 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57648 00:05:00.855 killing process with pid 57648 00:05:00.855 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.855 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.855 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57648' 00:05:00.855 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57648 00:05:00.855 18:02:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57648 00:05:04.141 ************************************ 00:05:04.141 END TEST exit_on_failed_rpc_init 00:05:04.141 ************************************ 00:05:04.141 00:05:04.141 real 0m4.899s 00:05:04.141 user 0m5.300s 00:05:04.141 sys 0m0.641s 00:05:04.141 18:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.141 18:02:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.141 18:02:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.141 ************************************ 00:05:04.141 END TEST skip_rpc 00:05:04.141 ************************************ 00:05:04.141 00:05:04.141 real 0m25.221s 00:05:04.141 user 0m24.215s 00:05:04.141 sys 0m2.366s 00:05:04.141 18:02:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.141 18:02:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.141 18:02:15 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:04.141 18:02:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.141 18:02:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.142 18:02:15 -- common/autotest_common.sh@10 -- # set +x 00:05:04.142 ************************************ 00:05:04.142 START TEST rpc_client 00:05:04.142 ************************************ 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:04.142 * Looking for test storage... 00:05:04.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.142 18:02:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 00:05:04.142 ' 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc 
genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 00:05:04.142 ' 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 00:05:04.142 ' 00:05:04.142 18:02:15 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 00:05:04.142 ' 00:05:04.142 18:02:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:04.142 OK 00:05:04.142 18:02:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:04.142 00:05:04.142 real 0m0.305s 00:05:04.142 user 0m0.166s 00:05:04.142 sys 0m0.155s 00:05:04.142 18:02:16 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.142 18:02:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:04.142 ************************************ 00:05:04.142 END TEST rpc_client 00:05:04.142 ************************************ 00:05:04.142 18:02:16 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:04.142 18:02:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.142 18:02:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.142 18:02:16 -- common/autotest_common.sh@10 -- # set +x 00:05:04.142 ************************************ 00:05:04.142 START TEST json_config 
00:05:04.142 ************************************ 00:05:04.142 18:02:16 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:04.142 18:02:16 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.142 18:02:16 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.142 18:02:16 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.142 18:02:16 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.142 18:02:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.142 18:02:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.142 18:02:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.142 18:02:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.142 18:02:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.142 18:02:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.142 18:02:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.142 18:02:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.142 18:02:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.142 18:02:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.142 18:02:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.142 18:02:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:04.142 18:02:16 json_config -- scripts/common.sh@345 -- # : 1 00:05:04.142 18:02:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.142 18:02:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.142 18:02:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:04.142 18:02:16 json_config -- scripts/common.sh@353 -- # local d=1 00:05:04.142 18:02:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.142 18:02:16 json_config -- scripts/common.sh@355 -- # echo 1 00:05:04.142 18:02:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.142 18:02:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:04.142 18:02:16 json_config -- scripts/common.sh@353 -- # local d=2 00:05:04.142 18:02:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.142 18:02:16 json_config -- scripts/common.sh@355 -- # echo 2 00:05:04.142 18:02:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.142 18:02:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.142 18:02:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.142 18:02:16 json_config -- scripts/common.sh@368 -- # return 0 00:05:04.142 18:02:16 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.142 18:02:16 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 00:05:04.142 ' 00:05:04.142 18:02:16 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 00:05:04.142 ' 00:05:04.142 18:02:16 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 00:05:04.142 ' 00:05:04.142 18:02:16 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.142 --rc genhtml_branch_coverage=1 00:05:04.142 --rc genhtml_function_coverage=1 00:05:04.142 --rc genhtml_legend=1 00:05:04.142 --rc geninfo_all_blocks=1 00:05:04.142 --rc geninfo_unexecuted_blocks=1 00:05:04.142 00:05:04.142 ' 00:05:04.142 18:02:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.142 18:02:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65089fda-b69e-46f5-994f-34d45af0c95c 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=65089fda-b69e-46f5-994f-34d45af0c95c 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.402 18:02:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.402 18:02:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.402 18:02:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.402 18:02:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.402 18:02:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.402 18:02:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.402 18:02:16 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.402 18:02:16 json_config -- paths/export.sh@5 -- # export PATH 00:05:04.402 18:02:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@51 -- # : 0 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.402 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.402 18:02:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.402 18:02:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:04.402 18:02:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:04.402 18:02:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:04.402 18:02:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:04.402 18:02:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:04.402 18:02:16 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:04.402 WARNING: No tests are enabled so not running JSON configuration tests 00:05:04.402 18:02:16 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:04.402 00:05:04.402 real 0m0.240s 00:05:04.402 user 0m0.144s 00:05:04.402 sys 0m0.101s 00:05:04.402 18:02:16 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.402 18:02:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.402 ************************************ 00:05:04.402 END TEST json_config 00:05:04.402 ************************************ 00:05:04.403 18:02:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:04.403 18:02:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.403 18:02:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.403 18:02:16 -- common/autotest_common.sh@10 -- # set +x 00:05:04.403 ************************************ 00:05:04.403 START TEST json_config_extra_key 00:05:04.403 ************************************ 00:05:04.403 18:02:16 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:04.403 18:02:16 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.403 18:02:16 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:05:04.403 18:02:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.663 18:02:16 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.663 18:02:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.663 18:02:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.663 18:02:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.663 18:02:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.663 18:02:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.663 18:02:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:04.664 18:02:16 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.664 18:02:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.664 --rc genhtml_branch_coverage=1 00:05:04.664 --rc genhtml_function_coverage=1 00:05:04.664 --rc genhtml_legend=1 00:05:04.664 --rc geninfo_all_blocks=1 00:05:04.664 --rc geninfo_unexecuted_blocks=1 00:05:04.664 00:05:04.664 ' 00:05:04.664 18:02:16 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.664 --rc genhtml_branch_coverage=1 00:05:04.664 --rc genhtml_function_coverage=1 00:05:04.664 --rc 
genhtml_legend=1 00:05:04.664 --rc geninfo_all_blocks=1 00:05:04.664 --rc geninfo_unexecuted_blocks=1 00:05:04.664 00:05:04.664 ' 00:05:04.664 18:02:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.664 --rc genhtml_branch_coverage=1 00:05:04.664 --rc genhtml_function_coverage=1 00:05:04.664 --rc genhtml_legend=1 00:05:04.664 --rc geninfo_all_blocks=1 00:05:04.664 --rc geninfo_unexecuted_blocks=1 00:05:04.664 00:05:04.664 ' 00:05:04.664 18:02:16 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.664 --rc genhtml_branch_coverage=1 00:05:04.664 --rc genhtml_function_coverage=1 00:05:04.664 --rc genhtml_legend=1 00:05:04.664 --rc geninfo_all_blocks=1 00:05:04.664 --rc geninfo_unexecuted_blocks=1 00:05:04.664 00:05:04.664 ' 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:65089fda-b69e-46f5-994f-34d45af0c95c 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=65089fda-b69e-46f5-994f-34d45af0c95c 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.664 18:02:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.664 18:02:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.664 18:02:16 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.664 18:02:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.664 18:02:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:04.664 18:02:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.664 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.664 18:02:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:04.664 INFO: launching applications... 
00:05:04.664 18:02:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57882 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.664 Waiting for target to run... 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57882 /var/tmp/spdk_tgt.sock 00:05:04.664 18:02:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57882 ']' 00:05:04.664 18:02:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:04.665 18:02:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.665 18:02:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.665 18:02:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:04.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.665 18:02:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.665 18:02:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:04.665 [2024-12-06 18:02:16.771245] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:05:04.665 [2024-12-06 18:02:16.771492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57882 ] 00:05:05.235 [2024-12-06 18:02:17.173466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.235 [2024-12-06 18:02:17.290026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.174 00:05:06.174 INFO: shutting down applications... 00:05:06.174 18:02:18 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.174 18:02:18 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:06.174 18:02:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:06.174 18:02:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:06.174 18:02:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:06.174 18:02:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:06.174 18:02:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.174 18:02:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57882 ]] 00:05:06.174 18:02:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57882 00:05:06.174 18:02:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.174 18:02:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.174 18:02:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:05:06.174 18:02:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.744 18:02:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.744 18:02:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.744 18:02:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:05:06.744 18:02:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.005 18:02:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.005 18:02:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.005 18:02:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:05:07.005 18:02:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.576 18:02:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.576 18:02:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.576 18:02:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:05:07.576 18:02:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.170 18:02:20 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:08.170 18:02:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.170 18:02:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:05:08.170 18:02:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.744 18:02:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.744 18:02:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.744 18:02:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:05:08.744 18:02:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.004 18:02:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.004 18:02:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.004 18:02:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:05:09.004 18:02:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.573 18:02:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.573 18:02:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.573 18:02:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:05:09.573 SPDK target shutdown done 00:05:09.573 Success 00:05:09.573 18:02:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.573 18:02:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:09.573 18:02:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.573 18:02:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.573 18:02:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:09.573 ************************************ 00:05:09.573 END TEST json_config_extra_key 00:05:09.573 ************************************ 00:05:09.573 00:05:09.573 real 0m5.224s 00:05:09.573 user 
0m4.698s 00:05:09.573 sys 0m0.627s 00:05:09.573 18:02:21 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.573 18:02:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.573 18:02:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.573 18:02:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.573 18:02:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.573 18:02:21 -- common/autotest_common.sh@10 -- # set +x 00:05:09.573 ************************************ 00:05:09.573 START TEST alias_rpc 00:05:09.573 ************************************ 00:05:09.573 18:02:21 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.832 * Looking for test storage... 00:05:09.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:09.832 18:02:21 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.832 18:02:21 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.832 18:02:21 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.832 18:02:21 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@340 
-- # ver1_l=2 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.832 18:02:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:09.832 18:02:21 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.832 18:02:21 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.832 --rc genhtml_branch_coverage=1 00:05:09.832 --rc genhtml_function_coverage=1 00:05:09.832 --rc genhtml_legend=1 00:05:09.832 --rc geninfo_all_blocks=1 00:05:09.832 --rc geninfo_unexecuted_blocks=1 00:05:09.832 
00:05:09.832 ' 00:05:09.832 18:02:21 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.832 --rc genhtml_branch_coverage=1 00:05:09.832 --rc genhtml_function_coverage=1 00:05:09.832 --rc genhtml_legend=1 00:05:09.832 --rc geninfo_all_blocks=1 00:05:09.832 --rc geninfo_unexecuted_blocks=1 00:05:09.832 00:05:09.832 ' 00:05:09.833 18:02:21 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.833 --rc genhtml_branch_coverage=1 00:05:09.833 --rc genhtml_function_coverage=1 00:05:09.833 --rc genhtml_legend=1 00:05:09.833 --rc geninfo_all_blocks=1 00:05:09.833 --rc geninfo_unexecuted_blocks=1 00:05:09.833 00:05:09.833 ' 00:05:09.833 18:02:21 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.833 --rc genhtml_branch_coverage=1 00:05:09.833 --rc genhtml_function_coverage=1 00:05:09.833 --rc genhtml_legend=1 00:05:09.833 --rc geninfo_all_blocks=1 00:05:09.833 --rc geninfo_unexecuted_blocks=1 00:05:09.833 00:05:09.833 ' 00:05:09.833 18:02:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:09.833 18:02:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.833 18:02:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58005 00:05:09.833 18:02:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58005 00:05:09.833 18:02:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58005 ']' 00:05:09.833 18:02:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.833 18:02:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.833 18:02:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.833 18:02:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.833 18:02:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.091 [2024-12-06 18:02:22.033099] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:05:10.091 [2024-12-06 18:02:22.033894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58005 ] 00:05:10.091 [2024-12-06 18:02:22.215150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.351 [2024-12-06 18:02:22.353798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.288 18:02:23 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.288 18:02:23 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.288 18:02:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:11.546 18:02:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58005 00:05:11.546 18:02:23 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58005 ']' 00:05:11.546 18:02:23 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58005 00:05:11.546 18:02:23 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:11.546 18:02:23 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.546 18:02:23 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58005 00:05:11.546 killing process with pid 58005 00:05:11.546 18:02:23 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.546 18:02:23 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.546 
18:02:23 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58005' 00:05:11.546 18:02:23 alias_rpc -- common/autotest_common.sh@973 -- # kill 58005 00:05:11.546 18:02:23 alias_rpc -- common/autotest_common.sh@978 -- # wait 58005 00:05:14.838 ************************************ 00:05:14.838 END TEST alias_rpc 00:05:14.838 ************************************ 00:05:14.838 00:05:14.838 real 0m4.864s 00:05:14.838 user 0m5.036s 00:05:14.838 sys 0m0.615s 00:05:14.838 18:02:26 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.838 18:02:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.838 18:02:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:14.838 18:02:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:14.838 18:02:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.838 18:02:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.838 18:02:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.838 ************************************ 00:05:14.838 START TEST spdkcli_tcp 00:05:14.838 ************************************ 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:14.838 * Looking for test storage... 
00:05:14.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.838 18:02:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.838 --rc genhtml_branch_coverage=1 00:05:14.838 --rc genhtml_function_coverage=1 00:05:14.838 --rc genhtml_legend=1 00:05:14.838 --rc geninfo_all_blocks=1 00:05:14.838 --rc geninfo_unexecuted_blocks=1 00:05:14.838 00:05:14.838 ' 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.838 --rc genhtml_branch_coverage=1 00:05:14.838 --rc genhtml_function_coverage=1 00:05:14.838 --rc genhtml_legend=1 00:05:14.838 --rc geninfo_all_blocks=1 00:05:14.838 --rc geninfo_unexecuted_blocks=1 00:05:14.838 00:05:14.838 ' 00:05:14.838 18:02:26 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.838 --rc genhtml_branch_coverage=1 00:05:14.838 --rc genhtml_function_coverage=1 00:05:14.838 --rc genhtml_legend=1 00:05:14.838 --rc geninfo_all_blocks=1 00:05:14.838 --rc geninfo_unexecuted_blocks=1 00:05:14.838 00:05:14.838 ' 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.838 --rc genhtml_branch_coverage=1 00:05:14.838 --rc genhtml_function_coverage=1 00:05:14.838 --rc genhtml_legend=1 00:05:14.838 --rc geninfo_all_blocks=1 00:05:14.838 --rc geninfo_unexecuted_blocks=1 00:05:14.838 00:05:14.838 ' 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.838 18:02:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58117 00:05:14.838 18:02:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58117 00:05:14.839 18:02:26 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 58117 ']' 00:05:14.839 18:02:26 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.839 18:02:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.839 18:02:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.839 18:02:26 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.839 18:02:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.839 [2024-12-06 18:02:26.928133] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:05:14.839 [2024-12-06 18:02:26.928386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58117 ] 00:05:15.096 [2024-12-06 18:02:27.110384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.096 [2024-12-06 18:02:27.251845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.096 [2024-12-06 18:02:27.251884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.483 18:02:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.483 18:02:28 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:16.483 18:02:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58140 00:05:16.483 18:02:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:16.483 18:02:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:16.483 [ 00:05:16.483 "bdev_malloc_delete", 
00:05:16.483 "bdev_malloc_create", 00:05:16.483 "bdev_null_resize", 00:05:16.483 "bdev_null_delete", 00:05:16.483 "bdev_null_create", 00:05:16.483 "bdev_nvme_cuse_unregister", 00:05:16.483 "bdev_nvme_cuse_register", 00:05:16.483 "bdev_opal_new_user", 00:05:16.483 "bdev_opal_set_lock_state", 00:05:16.483 "bdev_opal_delete", 00:05:16.483 "bdev_opal_get_info", 00:05:16.483 "bdev_opal_create", 00:05:16.483 "bdev_nvme_opal_revert", 00:05:16.483 "bdev_nvme_opal_init", 00:05:16.483 "bdev_nvme_send_cmd", 00:05:16.483 "bdev_nvme_set_keys", 00:05:16.483 "bdev_nvme_get_path_iostat", 00:05:16.483 "bdev_nvme_get_mdns_discovery_info", 00:05:16.483 "bdev_nvme_stop_mdns_discovery", 00:05:16.483 "bdev_nvme_start_mdns_discovery", 00:05:16.483 "bdev_nvme_set_multipath_policy", 00:05:16.483 "bdev_nvme_set_preferred_path", 00:05:16.483 "bdev_nvme_get_io_paths", 00:05:16.483 "bdev_nvme_remove_error_injection", 00:05:16.483 "bdev_nvme_add_error_injection", 00:05:16.483 "bdev_nvme_get_discovery_info", 00:05:16.483 "bdev_nvme_stop_discovery", 00:05:16.483 "bdev_nvme_start_discovery", 00:05:16.483 "bdev_nvme_get_controller_health_info", 00:05:16.483 "bdev_nvme_disable_controller", 00:05:16.483 "bdev_nvme_enable_controller", 00:05:16.483 "bdev_nvme_reset_controller", 00:05:16.483 "bdev_nvme_get_transport_statistics", 00:05:16.483 "bdev_nvme_apply_firmware", 00:05:16.483 "bdev_nvme_detach_controller", 00:05:16.483 "bdev_nvme_get_controllers", 00:05:16.483 "bdev_nvme_attach_controller", 00:05:16.483 "bdev_nvme_set_hotplug", 00:05:16.483 "bdev_nvme_set_options", 00:05:16.483 "bdev_passthru_delete", 00:05:16.483 "bdev_passthru_create", 00:05:16.483 "bdev_lvol_set_parent_bdev", 00:05:16.483 "bdev_lvol_set_parent", 00:05:16.483 "bdev_lvol_check_shallow_copy", 00:05:16.483 "bdev_lvol_start_shallow_copy", 00:05:16.483 "bdev_lvol_grow_lvstore", 00:05:16.483 "bdev_lvol_get_lvols", 00:05:16.483 "bdev_lvol_get_lvstores", 00:05:16.483 "bdev_lvol_delete", 00:05:16.483 "bdev_lvol_set_read_only", 
00:05:16.483 "bdev_lvol_resize", 00:05:16.483 "bdev_lvol_decouple_parent", 00:05:16.483 "bdev_lvol_inflate", 00:05:16.483 "bdev_lvol_rename", 00:05:16.483 "bdev_lvol_clone_bdev", 00:05:16.483 "bdev_lvol_clone", 00:05:16.483 "bdev_lvol_snapshot", 00:05:16.483 "bdev_lvol_create", 00:05:16.483 "bdev_lvol_delete_lvstore", 00:05:16.483 "bdev_lvol_rename_lvstore", 00:05:16.483 "bdev_lvol_create_lvstore", 00:05:16.483 "bdev_raid_set_options", 00:05:16.483 "bdev_raid_remove_base_bdev", 00:05:16.483 "bdev_raid_add_base_bdev", 00:05:16.483 "bdev_raid_delete", 00:05:16.483 "bdev_raid_create", 00:05:16.483 "bdev_raid_get_bdevs", 00:05:16.483 "bdev_error_inject_error", 00:05:16.483 "bdev_error_delete", 00:05:16.483 "bdev_error_create", 00:05:16.483 "bdev_split_delete", 00:05:16.483 "bdev_split_create", 00:05:16.483 "bdev_delay_delete", 00:05:16.483 "bdev_delay_create", 00:05:16.483 "bdev_delay_update_latency", 00:05:16.483 "bdev_zone_block_delete", 00:05:16.483 "bdev_zone_block_create", 00:05:16.483 "blobfs_create", 00:05:16.483 "blobfs_detect", 00:05:16.483 "blobfs_set_cache_size", 00:05:16.483 "bdev_aio_delete", 00:05:16.483 "bdev_aio_rescan", 00:05:16.483 "bdev_aio_create", 00:05:16.483 "bdev_ftl_set_property", 00:05:16.483 "bdev_ftl_get_properties", 00:05:16.483 "bdev_ftl_get_stats", 00:05:16.483 "bdev_ftl_unmap", 00:05:16.483 "bdev_ftl_unload", 00:05:16.483 "bdev_ftl_delete", 00:05:16.483 "bdev_ftl_load", 00:05:16.483 "bdev_ftl_create", 00:05:16.483 "bdev_virtio_attach_controller", 00:05:16.483 "bdev_virtio_scsi_get_devices", 00:05:16.483 "bdev_virtio_detach_controller", 00:05:16.483 "bdev_virtio_blk_set_hotplug", 00:05:16.483 "bdev_iscsi_delete", 00:05:16.483 "bdev_iscsi_create", 00:05:16.483 "bdev_iscsi_set_options", 00:05:16.483 "accel_error_inject_error", 00:05:16.483 "ioat_scan_accel_module", 00:05:16.483 "dsa_scan_accel_module", 00:05:16.483 "iaa_scan_accel_module", 00:05:16.483 "keyring_file_remove_key", 00:05:16.483 "keyring_file_add_key", 00:05:16.483 
"keyring_linux_set_options", 00:05:16.483 "fsdev_aio_delete", 00:05:16.483 "fsdev_aio_create", 00:05:16.483 "iscsi_get_histogram", 00:05:16.483 "iscsi_enable_histogram", 00:05:16.483 "iscsi_set_options", 00:05:16.483 "iscsi_get_auth_groups", 00:05:16.483 "iscsi_auth_group_remove_secret", 00:05:16.483 "iscsi_auth_group_add_secret", 00:05:16.483 "iscsi_delete_auth_group", 00:05:16.483 "iscsi_create_auth_group", 00:05:16.483 "iscsi_set_discovery_auth", 00:05:16.483 "iscsi_get_options", 00:05:16.483 "iscsi_target_node_request_logout", 00:05:16.483 "iscsi_target_node_set_redirect", 00:05:16.483 "iscsi_target_node_set_auth", 00:05:16.483 "iscsi_target_node_add_lun", 00:05:16.483 "iscsi_get_stats", 00:05:16.483 "iscsi_get_connections", 00:05:16.483 "iscsi_portal_group_set_auth", 00:05:16.483 "iscsi_start_portal_group", 00:05:16.483 "iscsi_delete_portal_group", 00:05:16.483 "iscsi_create_portal_group", 00:05:16.483 "iscsi_get_portal_groups", 00:05:16.483 "iscsi_delete_target_node", 00:05:16.483 "iscsi_target_node_remove_pg_ig_maps", 00:05:16.483 "iscsi_target_node_add_pg_ig_maps", 00:05:16.483 "iscsi_create_target_node", 00:05:16.483 "iscsi_get_target_nodes", 00:05:16.483 "iscsi_delete_initiator_group", 00:05:16.483 "iscsi_initiator_group_remove_initiators", 00:05:16.483 "iscsi_initiator_group_add_initiators", 00:05:16.483 "iscsi_create_initiator_group", 00:05:16.483 "iscsi_get_initiator_groups", 00:05:16.483 "nvmf_set_crdt", 00:05:16.483 "nvmf_set_config", 00:05:16.483 "nvmf_set_max_subsystems", 00:05:16.483 "nvmf_stop_mdns_prr", 00:05:16.483 "nvmf_publish_mdns_prr", 00:05:16.483 "nvmf_subsystem_get_listeners", 00:05:16.483 "nvmf_subsystem_get_qpairs", 00:05:16.483 "nvmf_subsystem_get_controllers", 00:05:16.483 "nvmf_get_stats", 00:05:16.483 "nvmf_get_transports", 00:05:16.483 "nvmf_create_transport", 00:05:16.483 "nvmf_get_targets", 00:05:16.483 "nvmf_delete_target", 00:05:16.483 "nvmf_create_target", 00:05:16.483 "nvmf_subsystem_allow_any_host", 00:05:16.483 
"nvmf_subsystem_set_keys", 00:05:16.483 "nvmf_subsystem_remove_host", 00:05:16.483 "nvmf_subsystem_add_host", 00:05:16.483 "nvmf_ns_remove_host", 00:05:16.483 "nvmf_ns_add_host", 00:05:16.483 "nvmf_subsystem_remove_ns", 00:05:16.483 "nvmf_subsystem_set_ns_ana_group", 00:05:16.483 "nvmf_subsystem_add_ns", 00:05:16.483 "nvmf_subsystem_listener_set_ana_state", 00:05:16.483 "nvmf_discovery_get_referrals", 00:05:16.483 "nvmf_discovery_remove_referral", 00:05:16.483 "nvmf_discovery_add_referral", 00:05:16.483 "nvmf_subsystem_remove_listener", 00:05:16.483 "nvmf_subsystem_add_listener", 00:05:16.483 "nvmf_delete_subsystem", 00:05:16.483 "nvmf_create_subsystem", 00:05:16.483 "nvmf_get_subsystems", 00:05:16.483 "env_dpdk_get_mem_stats", 00:05:16.483 "nbd_get_disks", 00:05:16.483 "nbd_stop_disk", 00:05:16.483 "nbd_start_disk", 00:05:16.483 "ublk_recover_disk", 00:05:16.483 "ublk_get_disks", 00:05:16.483 "ublk_stop_disk", 00:05:16.483 "ublk_start_disk", 00:05:16.483 "ublk_destroy_target", 00:05:16.483 "ublk_create_target", 00:05:16.483 "virtio_blk_create_transport", 00:05:16.483 "virtio_blk_get_transports", 00:05:16.483 "vhost_controller_set_coalescing", 00:05:16.483 "vhost_get_controllers", 00:05:16.483 "vhost_delete_controller", 00:05:16.483 "vhost_create_blk_controller", 00:05:16.483 "vhost_scsi_controller_remove_target", 00:05:16.483 "vhost_scsi_controller_add_target", 00:05:16.483 "vhost_start_scsi_controller", 00:05:16.483 "vhost_create_scsi_controller", 00:05:16.483 "thread_set_cpumask", 00:05:16.483 "scheduler_set_options", 00:05:16.483 "framework_get_governor", 00:05:16.483 "framework_get_scheduler", 00:05:16.483 "framework_set_scheduler", 00:05:16.483 "framework_get_reactors", 00:05:16.483 "thread_get_io_channels", 00:05:16.483 "thread_get_pollers", 00:05:16.483 "thread_get_stats", 00:05:16.483 "framework_monitor_context_switch", 00:05:16.483 "spdk_kill_instance", 00:05:16.483 "log_enable_timestamps", 00:05:16.483 "log_get_flags", 00:05:16.483 "log_clear_flag", 
00:05:16.483 "log_set_flag", 00:05:16.483 "log_get_level", 00:05:16.483 "log_set_level", 00:05:16.483 "log_get_print_level", 00:05:16.483 "log_set_print_level", 00:05:16.483 "framework_enable_cpumask_locks", 00:05:16.483 "framework_disable_cpumask_locks", 00:05:16.483 "framework_wait_init", 00:05:16.483 "framework_start_init", 00:05:16.483 "scsi_get_devices", 00:05:16.483 "bdev_get_histogram", 00:05:16.483 "bdev_enable_histogram", 00:05:16.483 "bdev_set_qos_limit", 00:05:16.483 "bdev_set_qd_sampling_period", 00:05:16.483 "bdev_get_bdevs", 00:05:16.483 "bdev_reset_iostat", 00:05:16.483 "bdev_get_iostat", 00:05:16.483 "bdev_examine", 00:05:16.483 "bdev_wait_for_examine", 00:05:16.483 "bdev_set_options", 00:05:16.483 "accel_get_stats", 00:05:16.483 "accel_set_options", 00:05:16.483 "accel_set_driver", 00:05:16.483 "accel_crypto_key_destroy", 00:05:16.483 "accel_crypto_keys_get", 00:05:16.483 "accel_crypto_key_create", 00:05:16.484 "accel_assign_opc", 00:05:16.484 "accel_get_module_info", 00:05:16.484 "accel_get_opc_assignments", 00:05:16.484 "vmd_rescan", 00:05:16.484 "vmd_remove_device", 00:05:16.484 "vmd_enable", 00:05:16.484 "sock_get_default_impl", 00:05:16.484 "sock_set_default_impl", 00:05:16.484 "sock_impl_set_options", 00:05:16.484 "sock_impl_get_options", 00:05:16.484 "iobuf_get_stats", 00:05:16.484 "iobuf_set_options", 00:05:16.484 "keyring_get_keys", 00:05:16.484 "framework_get_pci_devices", 00:05:16.484 "framework_get_config", 00:05:16.484 "framework_get_subsystems", 00:05:16.484 "fsdev_set_opts", 00:05:16.484 "fsdev_get_opts", 00:05:16.484 "trace_get_info", 00:05:16.484 "trace_get_tpoint_group_mask", 00:05:16.484 "trace_disable_tpoint_group", 00:05:16.484 "trace_enable_tpoint_group", 00:05:16.484 "trace_clear_tpoint_mask", 00:05:16.484 "trace_set_tpoint_mask", 00:05:16.484 "notify_get_notifications", 00:05:16.484 "notify_get_types", 00:05:16.484 "spdk_get_version", 00:05:16.484 "rpc_get_methods" 00:05:16.484 ] 00:05:16.484 18:02:28 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:16.484 18:02:28 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.484 18:02:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.484 18:02:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.484 18:02:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58117 00:05:16.484 18:02:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58117 ']' 00:05:16.484 18:02:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58117 00:05:16.484 18:02:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:16.484 18:02:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.484 18:02:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58117 00:05:16.741 killing process with pid 58117 00:05:16.741 18:02:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.741 18:02:28 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.741 18:02:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58117' 00:05:16.741 18:02:28 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58117 00:05:16.741 18:02:28 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58117 00:05:20.028 ************************************ 00:05:20.028 END TEST spdkcli_tcp 00:05:20.028 ************************************ 00:05:20.028 00:05:20.028 real 0m4.889s 00:05:20.028 user 0m8.944s 00:05:20.028 sys 0m0.649s 00:05:20.028 18:02:31 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.028 18:02:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.028 18:02:31 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:20.028 18:02:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.028 18:02:31 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.028 18:02:31 -- common/autotest_common.sh@10 -- # set +x 00:05:20.028 ************************************ 00:05:20.028 START TEST dpdk_mem_utility 00:05:20.028 ************************************ 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:20.028 * Looking for test storage... 00:05:20.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:20.028 
18:02:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.028 18:02:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.028 --rc genhtml_branch_coverage=1 00:05:20.028 --rc genhtml_function_coverage=1 00:05:20.028 --rc genhtml_legend=1 00:05:20.028 --rc geninfo_all_blocks=1 00:05:20.028 --rc geninfo_unexecuted_blocks=1 00:05:20.028 00:05:20.028 ' 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.028 --rc 
genhtml_branch_coverage=1 00:05:20.028 --rc genhtml_function_coverage=1 00:05:20.028 --rc genhtml_legend=1 00:05:20.028 --rc geninfo_all_blocks=1 00:05:20.028 --rc geninfo_unexecuted_blocks=1 00:05:20.028 00:05:20.028 ' 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.028 --rc genhtml_branch_coverage=1 00:05:20.028 --rc genhtml_function_coverage=1 00:05:20.028 --rc genhtml_legend=1 00:05:20.028 --rc geninfo_all_blocks=1 00:05:20.028 --rc geninfo_unexecuted_blocks=1 00:05:20.028 00:05:20.028 ' 00:05:20.028 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.028 --rc genhtml_branch_coverage=1 00:05:20.028 --rc genhtml_function_coverage=1 00:05:20.028 --rc genhtml_legend=1 00:05:20.028 --rc geninfo_all_blocks=1 00:05:20.028 --rc geninfo_unexecuted_blocks=1 00:05:20.028 00:05:20.028 ' 00:05:20.029 18:02:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:20.029 18:02:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.029 18:02:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58245 00:05:20.029 18:02:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58245 00:05:20.029 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58245 ']' 00:05:20.029 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.029 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.029 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:20.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.029 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.029 18:02:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.029 [2024-12-06 18:02:31.860351] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:05:20.029 [2024-12-06 18:02:31.860631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58245 ] 00:05:20.029 [2024-12-06 18:02:32.030307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.029 [2024-12-06 18:02:32.171884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.500 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.500 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:21.500 18:02:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:21.500 18:02:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:21.500 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.500 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.500 { 00:05:21.500 "filename": "/tmp/spdk_mem_dump.txt" 00:05:21.500 } 00:05:21.500 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.500 18:02:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:21.500 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:21.500 1 heaps totaling size 824.000000 MiB 00:05:21.500 size: 
824.000000 MiB heap id: 0 00:05:21.500 end heaps---------- 00:05:21.500 9 mempools totaling size 603.782043 MiB 00:05:21.500 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:21.500 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:21.500 size: 100.555481 MiB name: bdev_io_58245 00:05:21.500 size: 50.003479 MiB name: msgpool_58245 00:05:21.500 size: 36.509338 MiB name: fsdev_io_58245 00:05:21.500 size: 21.763794 MiB name: PDU_Pool 00:05:21.500 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:21.500 size: 4.133484 MiB name: evtpool_58245 00:05:21.500 size: 0.026123 MiB name: Session_Pool 00:05:21.500 end mempools------- 00:05:21.500 6 memzones totaling size 4.142822 MiB 00:05:21.500 size: 1.000366 MiB name: RG_ring_0_58245 00:05:21.500 size: 1.000366 MiB name: RG_ring_1_58245 00:05:21.500 size: 1.000366 MiB name: RG_ring_4_58245 00:05:21.500 size: 1.000366 MiB name: RG_ring_5_58245 00:05:21.500 size: 0.125366 MiB name: RG_ring_2_58245 00:05:21.500 size: 0.015991 MiB name: RG_ring_3_58245 00:05:21.500 end memzones------- 00:05:21.500 18:02:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:21.500 heap id: 0 total size: 824.000000 MiB number of busy elements: 319 number of free elements: 18 00:05:21.500 list of free elements. 
size: 16.780396 MiB 00:05:21.500 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:21.500 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:21.500 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:21.500 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:21.500 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:21.500 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:21.500 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:21.500 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:21.500 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:21.500 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:21.500 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:21.500 element at address: 0x20001b400000 with size: 0.561951 MiB 00:05:21.500 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:21.500 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:21.500 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:21.500 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:21.500 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:21.500 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:21.500 list of standard malloc elements. 
size: 199.288696 MiB 00:05:21.500 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:21.500 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:21.500 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:21.500 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:21.500 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:21.500 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:21.500 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:21.500 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:21.500 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:21.500 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:21.500 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:21.500 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:21.500 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:21.500 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:21.500 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:21.500 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:21.501 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:21.501 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b490dc0 with size: 0.000244 
MiB 00:05:21.501 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4929c0 
with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:21.501 element at 
address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:21.501 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:21.501 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886b880 with size: 0.000244 MiB 
00:05:21.502 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d480 with 
size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:21.502 element at address: 
0x20002886f080 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:21.502 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:21.502 list of memzone associated elements. 
size: 607.930908 MiB 00:05:21.502 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:21.502 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:21.502 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:21.502 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:21.502 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:21.502 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58245_0 00:05:21.502 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:21.502 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58245_0 00:05:21.502 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:21.502 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58245_0 00:05:21.502 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:21.502 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:21.502 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:21.502 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:21.502 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:21.502 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58245_0 00:05:21.502 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:21.502 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58245 00:05:21.502 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:21.502 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58245 00:05:21.502 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:21.502 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:21.502 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:21.502 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:21.502 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:21.502 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:21.502 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:21.502 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:21.502 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:21.502 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58245 00:05:21.502 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:21.502 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58245 00:05:21.502 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:21.502 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58245 00:05:21.502 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:21.502 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58245 00:05:21.502 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:21.502 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58245 00:05:21.502 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:21.502 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58245 00:05:21.502 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:21.502 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:21.502 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:21.502 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:21.502 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:21.502 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:21.502 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:21.502 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58245 00:05:21.502 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:21.502 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58245 00:05:21.502 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:21.502 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:21.502 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:21.502 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:21.502 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:21.502 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58245 00:05:21.502 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:21.502 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:21.502 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:21.502 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58245 00:05:21.502 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:21.502 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58245 00:05:21.502 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:21.502 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58245 00:05:21.502 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:21.502 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:21.502 18:02:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:21.502 18:02:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58245 00:05:21.502 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58245 ']' 00:05:21.503 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58245 00:05:21.503 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:21.503 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.503 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58245 00:05:21.503 killing process with pid 58245 00:05:21.503 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:21.503 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.503 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58245' 00:05:21.503 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58245 00:05:21.503 18:02:33 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58245 00:05:24.812 00:05:24.812 real 0m4.755s 00:05:24.812 user 0m4.816s 00:05:24.812 sys 0m0.594s 00:05:24.812 ************************************ 00:05:24.812 END TEST dpdk_mem_utility 00:05:24.812 ************************************ 00:05:24.812 18:02:36 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.812 18:02:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.812 18:02:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:24.812 18:02:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.812 18:02:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.812 18:02:36 -- common/autotest_common.sh@10 -- # set +x 00:05:24.812 ************************************ 00:05:24.812 START TEST event 00:05:24.812 ************************************ 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:24.812 * Looking for test storage... 
00:05:24.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.812 18:02:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.812 18:02:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.812 18:02:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.812 18:02:36 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.812 18:02:36 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.812 18:02:36 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.812 18:02:36 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.812 18:02:36 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.812 18:02:36 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.812 18:02:36 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.812 18:02:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.812 18:02:36 event -- scripts/common.sh@344 -- # case "$op" in 00:05:24.812 18:02:36 event -- scripts/common.sh@345 -- # : 1 00:05:24.812 18:02:36 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.812 18:02:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.812 18:02:36 event -- scripts/common.sh@365 -- # decimal 1 00:05:24.812 18:02:36 event -- scripts/common.sh@353 -- # local d=1 00:05:24.812 18:02:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.812 18:02:36 event -- scripts/common.sh@355 -- # echo 1 00:05:24.812 18:02:36 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.812 18:02:36 event -- scripts/common.sh@366 -- # decimal 2 00:05:24.812 18:02:36 event -- scripts/common.sh@353 -- # local d=2 00:05:24.812 18:02:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.812 18:02:36 event -- scripts/common.sh@355 -- # echo 2 00:05:24.812 18:02:36 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.812 18:02:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.812 18:02:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.812 18:02:36 event -- scripts/common.sh@368 -- # return 0 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.812 --rc genhtml_branch_coverage=1 00:05:24.812 --rc genhtml_function_coverage=1 00:05:24.812 --rc genhtml_legend=1 00:05:24.812 --rc geninfo_all_blocks=1 00:05:24.812 --rc geninfo_unexecuted_blocks=1 00:05:24.812 00:05:24.812 ' 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.812 --rc genhtml_branch_coverage=1 00:05:24.812 --rc genhtml_function_coverage=1 00:05:24.812 --rc genhtml_legend=1 00:05:24.812 --rc geninfo_all_blocks=1 00:05:24.812 --rc geninfo_unexecuted_blocks=1 00:05:24.812 00:05:24.812 ' 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.812 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:24.812 --rc genhtml_branch_coverage=1 00:05:24.812 --rc genhtml_function_coverage=1 00:05:24.812 --rc genhtml_legend=1 00:05:24.812 --rc geninfo_all_blocks=1 00:05:24.812 --rc geninfo_unexecuted_blocks=1 00:05:24.812 00:05:24.812 ' 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.812 --rc genhtml_branch_coverage=1 00:05:24.812 --rc genhtml_function_coverage=1 00:05:24.812 --rc genhtml_legend=1 00:05:24.812 --rc geninfo_all_blocks=1 00:05:24.812 --rc geninfo_unexecuted_blocks=1 00:05:24.812 00:05:24.812 ' 00:05:24.812 18:02:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:24.812 18:02:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:24.812 18:02:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:24.812 18:02:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.812 18:02:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.812 ************************************ 00:05:24.812 START TEST event_perf 00:05:24.812 ************************************ 00:05:24.812 18:02:36 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:24.812 Running I/O for 1 seconds...[2024-12-06 18:02:36.659609] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:05:24.812 [2024-12-06 18:02:36.659944] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58364 ] 00:05:24.812 [2024-12-06 18:02:36.859459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.070 [2024-12-06 18:02:37.009451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.070 [2024-12-06 18:02:37.009493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.070 [2024-12-06 18:02:37.009620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.070 Running I/O for 1 seconds...[2024-12-06 18:02:37.009663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.445 00:05:26.445 lcore 0: 164816 00:05:26.445 lcore 1: 164815 00:05:26.445 lcore 2: 164816 00:05:26.445 lcore 3: 164815 00:05:26.445 done. 
00:05:26.445 00:05:26.445 real 0m1.688s 00:05:26.445 user 0m4.428s 00:05:26.445 sys 0m0.123s 00:05:26.445 18:02:38 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.445 18:02:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.445 ************************************ 00:05:26.445 END TEST event_perf 00:05:26.445 ************************************ 00:05:26.445 18:02:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.445 18:02:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:26.445 18:02:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.445 18:02:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.445 ************************************ 00:05:26.445 START TEST event_reactor 00:05:26.445 ************************************ 00:05:26.445 18:02:38 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.445 [2024-12-06 18:02:38.403195] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:05:26.445 [2024-12-06 18:02:38.403393] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58409 ] 00:05:26.445 [2024-12-06 18:02:38.597985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.704 [2024-12-06 18:02:38.737468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.091 test_start 00:05:28.091 oneshot 00:05:28.091 tick 100 00:05:28.091 tick 100 00:05:28.091 tick 250 00:05:28.091 tick 100 00:05:28.091 tick 100 00:05:28.091 tick 100 00:05:28.091 tick 250 00:05:28.091 tick 500 00:05:28.091 tick 100 00:05:28.091 tick 100 00:05:28.091 tick 250 00:05:28.091 tick 100 00:05:28.091 tick 100 00:05:28.091 test_end 00:05:28.091 ************************************ 00:05:28.091 END TEST event_reactor 00:05:28.091 ************************************ 00:05:28.091 00:05:28.091 real 0m1.635s 00:05:28.091 user 0m1.429s 00:05:28.091 sys 0m0.096s 00:05:28.091 18:02:39 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.091 18:02:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:28.091 18:02:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.091 18:02:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:28.091 18:02:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.091 18:02:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.091 ************************************ 00:05:28.092 START TEST event_reactor_perf 00:05:28.092 ************************************ 00:05:28.092 18:02:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.092 [2024-12-06 
18:02:40.099699] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:05:28.092 [2024-12-06 18:02:40.099945] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58440 ] 00:05:28.349 [2024-12-06 18:02:40.282039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.349 [2024-12-06 18:02:40.415212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.727 test_start 00:05:29.727 test_end 00:05:29.727 Performance: 299193 events per second 00:05:29.727 00:05:29.727 real 0m1.628s 00:05:29.727 user 0m1.418s 00:05:29.727 sys 0m0.100s 00:05:29.727 ************************************ 00:05:29.727 END TEST event_reactor_perf 00:05:29.727 ************************************ 00:05:29.727 18:02:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.727 18:02:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.727 18:02:41 event -- event/event.sh@49 -- # uname -s 00:05:29.727 18:02:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:29.727 18:02:41 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.727 18:02:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.727 18:02:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.727 18:02:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.727 ************************************ 00:05:29.727 START TEST event_scheduler 00:05:29.727 ************************************ 00:05:29.727 18:02:41 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.727 * Looking for test storage... 
00:05:29.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:29.727 18:02:41 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.727 18:02:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.727 18:02:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.986 18:02:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.986 --rc genhtml_branch_coverage=1 00:05:29.986 --rc genhtml_function_coverage=1 00:05:29.986 --rc genhtml_legend=1 00:05:29.986 --rc geninfo_all_blocks=1 00:05:29.986 --rc geninfo_unexecuted_blocks=1 00:05:29.986 00:05:29.986 ' 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.986 --rc genhtml_branch_coverage=1 00:05:29.986 --rc genhtml_function_coverage=1 00:05:29.986 --rc 
genhtml_legend=1 00:05:29.986 --rc geninfo_all_blocks=1 00:05:29.986 --rc geninfo_unexecuted_blocks=1 00:05:29.986 00:05:29.986 ' 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.986 --rc genhtml_branch_coverage=1 00:05:29.986 --rc genhtml_function_coverage=1 00:05:29.986 --rc genhtml_legend=1 00:05:29.986 --rc geninfo_all_blocks=1 00:05:29.986 --rc geninfo_unexecuted_blocks=1 00:05:29.986 00:05:29.986 ' 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.986 --rc genhtml_branch_coverage=1 00:05:29.986 --rc genhtml_function_coverage=1 00:05:29.986 --rc genhtml_legend=1 00:05:29.986 --rc geninfo_all_blocks=1 00:05:29.986 --rc geninfo_unexecuted_blocks=1 00:05:29.986 00:05:29.986 ' 00:05:29.986 18:02:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:29.986 18:02:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:29.986 18:02:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58516 00:05:29.986 18:02:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.986 18:02:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58516 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58516 ']' 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:29.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.986 18:02:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.986 [2024-12-06 18:02:42.082044] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:05:29.986 [2024-12-06 18:02:42.082321] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58516 ] 00:05:30.245 [2024-12-06 18:02:42.256716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.245 [2024-12-06 18:02:42.403611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.245 [2024-12-06 18:02:42.403744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.245 [2024-12-06 18:02:42.403755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.245 [2024-12-06 18:02:42.403745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.181 18:02:43 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.181 18:02:43 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:31.181 18:02:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:31.181 18:02:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.181 18:02:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.181 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.181 POWER: Cannot set governor of lcore 0 to userspace 00:05:31.181 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.181 POWER: Cannot set governor of lcore 0 to performance 00:05:31.181 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.181 POWER: Cannot set governor of lcore 0 to userspace 00:05:31.181 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:31.181 POWER: Cannot set governor of lcore 0 to userspace 00:05:31.181 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:31.181 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:31.181 POWER: Unable to set Power Management Environment for lcore 0 00:05:31.181 [2024-12-06 18:02:43.052885] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:31.181 [2024-12-06 18:02:43.052914] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:31.181 [2024-12-06 18:02:43.052927] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:31.181 [2024-12-06 18:02:43.052952] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:31.181 [2024-12-06 18:02:43.052962] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:31.181 [2024-12-06 18:02:43.052973] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:31.181 18:02:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.181 18:02:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:31.181 18:02:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.181 18:02:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.440 [2024-12-06 18:02:43.440181] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:31.440 18:02:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.440 18:02:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:31.440 18:02:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.440 18:02:43 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.440 18:02:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.440 ************************************ 00:05:31.440 START TEST scheduler_create_thread 00:05:31.440 ************************************ 00:05:31.440 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:31.440 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:31.440 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.440 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.440 2 00:05:31.440 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 3 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 4 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 5 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 6 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:31.441 7 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 8 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 9 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 10 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.441 18:02:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.345 18:02:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.345 18:02:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:33.345 18:02:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:33.345 18:02:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.345 18:02:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.297 ************************************ 00:05:34.297 END TEST scheduler_create_thread 00:05:34.297 ************************************ 00:05:34.297 18:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.297 00:05:34.297 real 0m2.624s 00:05:34.297 user 0m0.019s 00:05:34.297 sys 0m0.009s 00:05:34.297 18:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.297 18:02:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.297 18:02:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:34.297 18:02:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58516 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58516 ']' 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58516 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58516 00:05:34.297 killing process with pid 58516 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58516' 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58516 00:05:34.297 18:02:46 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58516 00:05:34.297 [2024-12-06 18:02:46.460511] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:36.198 00:05:36.198 real 0m6.289s 00:05:36.198 user 0m13.819s 00:05:36.198 sys 0m0.513s 00:05:36.198 18:02:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.198 ************************************ 00:05:36.198 END TEST event_scheduler 00:05:36.198 ************************************ 00:05:36.198 18:02:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.198 18:02:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.198 18:02:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.198 18:02:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.198 18:02:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.198 18:02:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.198 ************************************ 00:05:36.198 START TEST app_repeat 00:05:36.198 ************************************ 00:05:36.198 18:02:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58633 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:36.198 
Process app_repeat pid: 58633 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58633' 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:36.198 spdk_app_start Round 0 00:05:36.198 18:02:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58633 /var/tmp/spdk-nbd.sock 00:05:36.198 18:02:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58633 ']' 00:05:36.198 18:02:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.198 18:02:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.198 18:02:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.198 18:02:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.198 18:02:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.198 [2024-12-06 18:02:48.172697] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:05:36.198 [2024-12-06 18:02:48.173018] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58633 ] 00:05:36.457 [2024-12-06 18:02:48.363285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.457 [2024-12-06 18:02:48.516926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.457 [2024-12-06 18:02:48.516936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.023 18:02:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.023 18:02:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.023 18:02:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.590 Malloc0 00:05:37.590 18:02:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.857 Malloc1 00:05:37.857 18:02:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.857 18:02:49 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.857 18:02:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.858 18:02:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.130 /dev/nbd0 00:05:38.130 18:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.130 18:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.130 1+0 records in 00:05:38.130 1+0 
records out 00:05:38.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300346 s, 13.6 MB/s 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.130 18:02:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.130 18:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.130 18:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.130 18:02:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.388 /dev/nbd1 00:05:38.646 18:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.646 18:02:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.646 1+0 records in 00:05:38.646 1+0 records out 00:05:38.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306118 s, 13.4 MB/s 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.646 18:02:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.646 18:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.646 18:02:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.646 18:02:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.646 18:02:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.646 18:02:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.904 { 00:05:38.904 "nbd_device": "/dev/nbd0", 00:05:38.904 "bdev_name": "Malloc0" 00:05:38.904 }, 00:05:38.904 { 00:05:38.904 "nbd_device": "/dev/nbd1", 00:05:38.904 "bdev_name": "Malloc1" 00:05:38.904 } 00:05:38.904 ]' 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.904 { 00:05:38.904 "nbd_device": "/dev/nbd0", 00:05:38.904 "bdev_name": "Malloc0" 00:05:38.904 }, 00:05:38.904 { 00:05:38.904 "nbd_device": "/dev/nbd1", 00:05:38.904 "bdev_name": "Malloc1" 00:05:38.904 } 00:05:38.904 ]' 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.904 /dev/nbd1' 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.904 /dev/nbd1' 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.904 18:02:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.905 256+0 records in 00:05:38.905 256+0 records out 00:05:38.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604882 s, 173 MB/s 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.905 256+0 records in 00:05:38.905 256+0 records out 00:05:38.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298188 s, 35.2 MB/s 00:05:38.905 18:02:50 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.905 256+0 records in 00:05:38.905 256+0 records out 00:05:38.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0409557 s, 25.6 MB/s 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.905 18:02:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.905 18:02:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.473 18:02:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.042 18:02:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.042 18:02:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.610 18:02:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.988 [2024-12-06 18:02:54.015894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.246 [2024-12-06 18:02:54.175165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.246 [2024-12-06 18:02:54.175165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.505 
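The `waitfornbd` / `waitfornbd_exit` traces above all follow the same bounded-poll shape: retry up to 20 times for `grep -q -w nbdX /proc/partitions` to succeed (or stop succeeding), then `break` or fail. A minimal standalone sketch of that pattern follows; it polls for a plain file instead of a `/proc/partitions` entry so it runs without SPDK or nbd devices, and the paths and function name are illustrative, not from the test suite.

```shell
#!/usr/bin/env bash
# Bounded-poll helper in the style of waitfornbd: retry up to 20 times
# for a resource to appear, sleeping briefly between attempts.
# A plain file stands in for the /proc/partitions entry checked above.
wait_for_file() {
    local path=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if [[ -e $path ]]; then
            return 0            # resource showed up; caller proceeds
        fi
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/nbd0" ) &   # resource appears asynchronously
wait_for_file "$tmp/nbd0" && echo "found"
wait
rm -rf "$tmp"
```

The real helper additionally does a direct-I/O `dd` read from the device once it appears, to confirm the kernel can actually serve I/O and not just that the name is listed.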
[2024-12-06 18:02:54.433041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.505 [2024-12-06 18:02:54.433241] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.440 spdk_app_start Round 1 00:05:43.440 18:02:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.440 18:02:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.440 18:02:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58633 /var/tmp/spdk-nbd.sock 00:05:43.440 18:02:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58633 ']' 00:05:43.440 18:02:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.440 18:02:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.440 18:02:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:43.440 18:02:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.440 18:02:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.005 18:02:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.005 18:02:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.005 18:02:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.262 Malloc0 00:05:44.263 18:02:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.521 Malloc1 00:05:44.521 18:02:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.521 18:02:56 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.521 18:02:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.780 /dev/nbd0 00:05:44.780 18:02:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.780 18:02:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.780 1+0 records in 00:05:44.780 1+0 records out 00:05:44.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339899 s, 12.1 MB/s 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.780 
18:02:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.780 18:02:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.780 18:02:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.780 18:02:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.780 18:02:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.349 /dev/nbd1 00:05:45.350 18:02:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.350 18:02:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.350 1+0 records in 00:05:45.350 1+0 records out 00:05:45.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649974 s, 6.3 MB/s 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.350 18:02:57 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.350 18:02:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.350 18:02:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.350 18:02:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.350 18:02:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.350 18:02:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.350 18:02:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.609 { 00:05:45.609 "nbd_device": "/dev/nbd0", 00:05:45.609 "bdev_name": "Malloc0" 00:05:45.609 }, 00:05:45.609 { 00:05:45.609 "nbd_device": "/dev/nbd1", 00:05:45.609 "bdev_name": "Malloc1" 00:05:45.609 } 00:05:45.609 ]' 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.609 { 00:05:45.609 "nbd_device": "/dev/nbd0", 00:05:45.609 "bdev_name": "Malloc0" 00:05:45.609 }, 00:05:45.609 { 00:05:45.609 "nbd_device": "/dev/nbd1", 00:05:45.609 "bdev_name": "Malloc1" 00:05:45.609 } 00:05:45.609 ]' 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.609 /dev/nbd1' 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.609 /dev/nbd1' 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.609 
18:02:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.609 256+0 records in 00:05:45.609 256+0 records out 00:05:45.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00625457 s, 168 MB/s 00:05:45.609 18:02:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.610 256+0 records in 00:05:45.610 256+0 records out 00:05:45.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262944 s, 39.9 MB/s 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.610 256+0 records in 00:05:45.610 256+0 records out 00:05:45.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252504 s, 41.5 MB/s 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.610 18:02:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.936 18:02:58 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.936 18:02:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.936 18:02:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.936 18:02:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.936 18:02:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.936 18:02:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.936 18:02:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.936 18:02:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.936 18:02:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.936 18:02:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.517 18:02:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.776 18:02:58 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.776 18:02:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.776 18:02:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.341 18:02:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.714 [2024-12-06 18:03:00.696953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.714 [2024-12-06 18:03:00.833471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.714 [2024-12-06 18:03:00.833497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.972 [2024-12-06 18:03:01.065576] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.972 [2024-12-06 18:03:01.065714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.344 spdk_app_start Round 2 00:05:50.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
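Each round above runs `nbd_dd_data_verify` twice: a write pass that fills a 1 MiB temp file from `/dev/urandom` and `dd`s it onto each `/dev/nbdX`, then a verify pass that byte-compares the device contents back against the temp file with `cmp -b -n 1M`. A minimal standalone sketch of that write/verify cycle follows; a plain file stands in for the nbd device (so `oflag=direct` is omitted), and the temp paths are illustrative.

```shell
#!/usr/bin/env bash
# Write/verify cycle in the style of nbd_dd_data_verify:
# fill a temp file with random data, copy it to a target, then cmp.
tmp=$(mktemp -d)
rand_file=$tmp/nbdrandtest
target=$tmp/nbd0            # stand-in for the real /dev/nbd0 device

# write phase: 256 x 4 KiB blocks of random data, copied block-for-block
dd if=/dev/urandom of="$rand_file" bs=4096 count=256 2>/dev/null
dd if="$rand_file" of="$target" bs=4096 count=256 2>/dev/null

# verify phase: byte-compare the first 1 MiB, as cmp -b -n 1M does above
if cmp -s -n 1048576 "$rand_file" "$target"; then
    echo "verify OK"
else
    echo "verify FAILED" >&2
fi
rm -rf "$tmp"
```

Against a real block device the `dd` writes use `oflag=direct` so the data reaches the SPDK bdev rather than the page cache, which is what makes the subsequent `cmp` a meaningful end-to-end check.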
00:05:50.344 18:03:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.344 18:03:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.344 18:03:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58633 /var/tmp/spdk-nbd.sock 00:05:50.344 18:03:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58633 ']' 00:05:50.344 18:03:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.344 18:03:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.344 18:03:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.344 18:03:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.344 18:03:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.601 18:03:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.601 18:03:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.601 18:03:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.859 Malloc0 00:05:51.117 18:03:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.375 Malloc1 00:05:51.376 18:03:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.376 18:03:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.635 /dev/nbd0 00:05:51.635 18:03:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.635 18:03:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.635 18:03:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.635 18:03:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.635 18:03:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.635 18:03:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.635 18:03:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.893 18:03:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.893 18:03:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:51.893 18:03:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.893 18:03:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.893 1+0 records in 00:05:51.893 1+0 records out 00:05:51.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048713 s, 8.4 MB/s 00:05:51.893 18:03:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.893 18:03:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.893 18:03:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.893 18:03:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.893 18:03:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.893 18:03:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.893 18:03:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.893 18:03:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.152 /dev/nbd1 00:05:52.152 18:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.152 18:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:52.152 18:03:04 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.152 1+0 records in 00:05:52.152 1+0 records out 00:05:52.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458448 s, 8.9 MB/s 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.152 18:03:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.152 18:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.152 18:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.152 18:03:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.152 18:03:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.152 18:03:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.719 { 00:05:52.719 "nbd_device": "/dev/nbd0", 00:05:52.719 "bdev_name": "Malloc0" 00:05:52.719 }, 00:05:52.719 { 00:05:52.719 "nbd_device": "/dev/nbd1", 00:05:52.719 "bdev_name": "Malloc1" 00:05:52.719 } 00:05:52.719 ]' 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.719 { 00:05:52.719 "nbd_device": "/dev/nbd0", 00:05:52.719 "bdev_name": "Malloc0" 00:05:52.719 }, 00:05:52.719 { 00:05:52.719 "nbd_device": "/dev/nbd1", 00:05:52.719 "bdev_name": "Malloc1" 00:05:52.719 } 00:05:52.719 ]' 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.719 /dev/nbd1' 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.719 /dev/nbd1' 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.719 256+0 records in 00:05:52.719 256+0 records out 00:05:52.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135772 s, 77.2 MB/s 00:05:52.719 18:03:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.720 18:03:04 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.720 256+0 records in 00:05:52.720 256+0 records out 00:05:52.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272024 s, 38.5 MB/s 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.720 256+0 records in 00:05:52.720 256+0 records out 00:05:52.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282898 s, 37.1 MB/s 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.720 18:03:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.978 18:03:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.236 18:03:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.801 18:03:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.801 18:03:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.364 18:03:06 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:05:55.738 [2024-12-06 18:03:07.648447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.738 [2024-12-06 18:03:07.788469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.738 [2024-12-06 18:03:07.788475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.997 [2024-12-06 18:03:08.019659] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.997 [2024-12-06 18:03:08.019775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.376 18:03:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58633 /var/tmp/spdk-nbd.sock 00:05:57.376 18:03:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58633 ']' 00:05:57.376 18:03:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.376 18:03:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.376 18:03:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
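The `nbd_dd_data_verify` phases traced above write 1 MiB of `/dev/urandom` through each nbd device and later `cmp` it back against the source file. A minimal sketch of that round-trip using ordinary temp files in place of `/dev/nbd0`/`/dev/nbd1` (so no block device and no `oflag=direct` is needed; the paths are stand-ins):

```shell
#!/usr/bin/env bash
# Write random data through a "device", then verify it byte-for-byte,
# mirroring nbd_dd_data_verify's write and verify phases.
tmp_file=$(mktemp)   # stands in for .../test/event/nbdrandtest
dev=$(mktemp)        # stands in for /dev/nbd0
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
cmp -b -n 1M "$tmp_file" "$dev"   # exits non-zero on the first mismatch
verify_rc=$?
rm -f "$tmp_file" "$dev"
echo "verify_rc=$verify_rc"
```

Comparing back through the device path is what proves the nbd data plane, not just the RPC plumbing, is working.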
00:05:57.376 18:03:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.376 18:03:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.634 18:03:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.634 18:03:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:57.634 18:03:09 event.app_repeat -- event/event.sh@39 -- # killprocess 58633 00:05:57.634 18:03:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58633 ']' 00:05:57.634 18:03:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58633 00:05:57.634 18:03:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:57.634 18:03:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.634 18:03:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58633 00:05:57.634 18:03:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.635 18:03:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.635 killing process with pid 58633 00:05:57.635 18:03:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58633' 00:05:57.635 18:03:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58633 00:05:57.635 18:03:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58633 00:05:59.013 spdk_app_start is called in Round 0. 00:05:59.013 Shutdown signal received, stop current app iteration 00:05:59.013 Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 reinitialization... 00:05:59.013 spdk_app_start is called in Round 1. 00:05:59.013 Shutdown signal received, stop current app iteration 00:05:59.013 Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 reinitialization... 00:05:59.013 spdk_app_start is called in Round 2. 
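The `killprocess` helper at this point in the trace first confirms the pid's command name with `ps --no-headers -o comm=`, refuses to signal `sudo`, then kills and waits. A sketch of that teardown pattern against a throwaway `sleep` process (the `sudo` guard comes from the trace; `sleep` is just a safe stand-in target):

```shell
#!/usr/bin/env bash
# killprocess-style teardown: check the command name, signal, then reap.
sleep 30 &
pid=$!
process_name=$(ps --no-headers -o comm= -p "$pid")
if [ "$process_name" != "sudo" ]; then   # the helper never kills sudo
    kill "$pid"
fi
wait "$pid" 2>/dev/null || true          # reap; status reflects the signal
if kill -0 "$pid" 2>/dev/null; then alive=yes; else alive=no; fi
echo "alive=$alive"
```

Checking the command name before killing guards against pid reuse between the test that recorded the pid and the cleanup that signals it.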
00:05:59.013 Shutdown signal received, stop current app iteration 00:05:59.013 Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 reinitialization... 00:05:59.013 spdk_app_start is called in Round 3. 00:05:59.013 Shutdown signal received, stop current app iteration 00:05:59.013 18:03:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:59.013 18:03:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:59.013 00:05:59.013 real 0m22.783s 00:05:59.013 user 0m50.074s 00:05:59.013 sys 0m3.408s 00:05:59.013 18:03:10 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.013 18:03:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.013 ************************************ 00:05:59.013 END TEST app_repeat 00:05:59.013 ************************************ 00:05:59.013 18:03:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:59.013 18:03:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:59.013 18:03:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.013 18:03:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.013 18:03:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.013 ************************************ 00:05:59.013 START TEST cpu_locks 00:05:59.013 ************************************ 00:05:59.013 18:03:10 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:59.013 * Looking for test storage... 
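The `nbd_get_count` checks earlier in the trace (count=2 after start, count=0 after stop) derive the count by piping the jq-extracted names through `grep -c /dev/nbd`. The trailing `true` visible in the trace matters: `grep -c` still prints `0` but exits 1 when nothing matches, so the fallthrough keeps the helper from failing on an empty disk list. A condensed sketch (the function name is illustrative):

```shell
#!/usr/bin/env bash
# nbd_get_count-style counting: grep -c exits 1 on zero matches, so the
# "|| true" fallthrough (the bare "true" in the trace) preserves count=0.
count_nbd() {
    echo "$1" | grep -c /dev/nbd || true
}
count_nbd '/dev/nbd0
/dev/nbd1'
count_nbd ''
```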
00:05:59.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.013 18:03:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:59.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.013 --rc genhtml_branch_coverage=1 00:05:59.013 --rc genhtml_function_coverage=1 00:05:59.013 --rc genhtml_legend=1 00:05:59.013 --rc geninfo_all_blocks=1 00:05:59.013 --rc geninfo_unexecuted_blocks=1 00:05:59.013 00:05:59.013 ' 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:59.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.013 --rc genhtml_branch_coverage=1 00:05:59.013 --rc genhtml_function_coverage=1 00:05:59.013 --rc genhtml_legend=1 00:05:59.013 --rc geninfo_all_blocks=1 00:05:59.013 --rc geninfo_unexecuted_blocks=1 
00:05:59.013 00:05:59.013 ' 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:59.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.013 --rc genhtml_branch_coverage=1 00:05:59.013 --rc genhtml_function_coverage=1 00:05:59.013 --rc genhtml_legend=1 00:05:59.013 --rc geninfo_all_blocks=1 00:05:59.013 --rc geninfo_unexecuted_blocks=1 00:05:59.013 00:05:59.013 ' 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:59.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.013 --rc genhtml_branch_coverage=1 00:05:59.013 --rc genhtml_function_coverage=1 00:05:59.013 --rc genhtml_legend=1 00:05:59.013 --rc geninfo_all_blocks=1 00:05:59.013 --rc geninfo_unexecuted_blocks=1 00:05:59.013 00:05:59.013 ' 00:05:59.013 18:03:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:59.013 18:03:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:59.013 18:03:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:59.013 18:03:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.013 18:03:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.013 ************************************ 00:05:59.013 START TEST default_locks 00:05:59.013 ************************************ 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59125 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59125 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59125 ']' 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.013 18:03:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.272 [2024-12-06 18:03:11.259822] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
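The `lt 1.15 2` trace above steps through the component-wise version compare in `scripts/common.sh`: split each version on `.`, `-`, or `:` into an array with `IFS=.-: read -ra`, then walk the fields numerically. A condensed, self-contained sketch of that idiom (the function name here is illustrative, not the script's):

```shell
#!/usr/bin/env bash
# Component-wise "less than" for dotted versions, after the cmp_versions
# idiom in scripts/common.sh: split on .-: and compare field by field.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<<"$1"
    IFS=.-: read -ra ver2 <<<"$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v
    for ((v = 0; v < n; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

Numeric field comparison is why `1.15 < 2` holds here while a naive string compare would order them the other way.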
00:05:59.272 [2024-12-06 18:03:11.259973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59125 ] 00:05:59.531 [2024-12-06 18:03:11.443575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.531 [2024-12-06 18:03:11.589775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.470 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.470 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:00.470 18:03:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59125 00:06:00.470 18:03:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59125 00:06:00.470 18:03:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59125 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59125 ']' 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59125 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59125 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.039 killing process with pid 59125 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59125' 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59125 00:06:01.039 18:03:12 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59125 00:06:04.327 18:03:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59125 00:06:04.327 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:04.327 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59125 00:06:04.327 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.327 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.327 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.327 ERROR: process (pid: 59125) is no longer running 00:06:04.327 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59125 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59125 ']' 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
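The `locks_exist` check above confirms the target still holds its `spdk_cpu_lock` file lock by grepping `lslocks -p` output. Since `lslocks` needs a live external lock holder, here is a self-contained sketch of the underlying `flock` behaviour being verified: while one descriptor holds an exclusive lock, a second non-blocking attempt is refused (the lock file path is hypothetical):

```shell
#!/usr/bin/env bash
# Hold an exclusive flock on fd 9, then show that a second non-blocking
# attempt fails -- the held-lock condition lslocks/locks_exist reports.
lockfile=$(mktemp)   # stands in for an spdk_cpu_lock file in /var/tmp
exec 9>"$lockfile"
flock -n 9                               # this shell takes the lock
if flock -n "$lockfile" -c true; then    # a child process tries again
    held=0
else
    held=1                               # refused: the lock is held
fi
exec 9>&-                                # close fd 9, releasing the lock
rm -f "$lockfile"
echo "held=$held"
```

Tying the lock to an open descriptor is the property the SPDK core-mask locks rely on: the lock disappears automatically when the owning process exits, which is why `killprocess` followed by a fresh start succeeds.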
00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59125) - No such process 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.328 00:06:04.328 real 0m4.698s 00:06:04.328 user 0m4.669s 00:06:04.328 sys 0m0.696s 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.328 18:03:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.328 ************************************ 00:06:04.328 END TEST default_locks 00:06:04.328 ************************************ 00:06:04.328 18:03:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.328 18:03:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.328 18:03:15 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.328 18:03:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.328 ************************************ 00:06:04.328 START TEST default_locks_via_rpc 00:06:04.328 ************************************ 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59200 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59200 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59200 ']' 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.328 18:03:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.328 [2024-12-06 18:03:16.002919] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:06:04.328 [2024-12-06 18:03:16.003074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59200 ] 00:06:04.328 [2024-12-06 18:03:16.183664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.328 [2024-12-06 18:03:16.313224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.267 18:03:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59200 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59200 00:06:05.267 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59200 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59200 ']' 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59200 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59200 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.527 killing process with pid 59200 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59200' 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59200 00:06:05.527 18:03:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59200 00:06:08.063 00:06:08.063 real 0m4.323s 00:06:08.063 user 0m4.301s 00:06:08.063 sys 0m0.663s 00:06:08.063 18:03:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.063 18:03:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.063 ************************************ 00:06:08.063 END TEST default_locks_via_rpc 00:06:08.063 ************************************ 00:06:08.324 18:03:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:08.324 18:03:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.324 18:03:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.324 18:03:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.324 ************************************ 00:06:08.324 START TEST non_locking_app_on_locked_coremask 00:06:08.324 ************************************ 00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59280 00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59280 /var/tmp/spdk.sock 00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59280 ']' 00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.324 18:03:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.324 [2024-12-06 18:03:20.390693] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:06:08.324 [2024-12-06 18:03:20.390818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59280 ] 00:06:08.586 [2024-12-06 18:03:20.553150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.586 [2024-12-06 18:03:20.680293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59302 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59302 /var/tmp/spdk2.sock 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59302 ']' 00:06:09.522 18:03:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.522 18:03:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.781 [2024-12-06 18:03:21.735743] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:06:09.781 [2024-12-06 18:03:21.735886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:06:09.781 [2024-12-06 18:03:21.913148] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.781 [2024-12-06 18:03:21.913213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.039 [2024-12-06 18:03:22.160757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.596 18:03:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.596 18:03:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.596 18:03:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59280 00:06:12.596 18:03:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59280 00:06:12.596 18:03:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59280 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59280 ']' 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59280 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59280 00:06:13.168 killing process with pid 59280 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59280' 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59280 00:06:13.168 18:03:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59280 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59302 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59302 ']' 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59302 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59302 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.439 killing process with pid 59302 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59302' 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59302 00:06:18.439 18:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59302 00:06:21.727 00:06:21.727 real 0m12.965s 00:06:21.727 user 0m13.351s 00:06:21.727 sys 0m1.422s 00:06:21.727 18:03:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:21.727 18:03:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.727 ************************************ 00:06:21.727 END TEST non_locking_app_on_locked_coremask 00:06:21.727 ************************************ 00:06:21.727 18:03:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:21.727 18:03:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.727 18:03:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.727 18:03:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.727 ************************************ 00:06:21.727 START TEST locking_app_on_unlocked_coremask 00:06:21.727 ************************************ 00:06:21.727 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:21.727 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:21.727 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59461 00:06:21.727 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59461 /var/tmp/spdk.sock 00:06:21.727 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59461 ']' 00:06:21.727 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.727 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:21.728 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.728 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.728 18:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.728 [2024-12-06 18:03:33.407476] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:06:21.728 [2024-12-06 18:03:33.407624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59461 ] 00:06:21.728 [2024-12-06 18:03:33.589117] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.728 [2024-12-06 18:03:33.589195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.728 [2024-12-06 18:03:33.719494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59483 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59483 /var/tmp/spdk2.sock 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59483 ']' 
00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.665 18:03:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.665 [2024-12-06 18:03:34.789329] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:06:22.665 [2024-12-06 18:03:34.789900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59483 ] 00:06:22.923 [2024-12-06 18:03:34.969763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.179 [2024-12-06 18:03:35.221691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.737 18:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.737 18:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.737 18:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59483 00:06:25.737 18:03:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59483 00:06:25.737 18:03:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59461 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59461 ']' 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59461 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59461 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.994 killing process with pid 59461 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59461' 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59461 00:06:25.994 18:03:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59461 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59483 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59483 ']' 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59483 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59483 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.286 killing process with pid 59483 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59483' 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59483 00:06:31.286 18:03:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59483 00:06:34.593 00:06:34.593 real 0m12.705s 00:06:34.593 user 0m13.113s 00:06:34.593 sys 0m1.271s 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.593 ************************************ 00:06:34.593 END TEST locking_app_on_unlocked_coremask 00:06:34.593 ************************************ 00:06:34.593 18:03:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:34.593 18:03:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.593 18:03:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.593 18:03:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.593 ************************************ 00:06:34.593 START TEST 
locking_app_on_locked_coremask 00:06:34.593 ************************************ 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59642 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59642 /var/tmp/spdk.sock 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59642 ']' 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.593 18:03:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.593 [2024-12-06 18:03:46.187549] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:06:34.593 [2024-12-06 18:03:46.187695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59642 ] 00:06:34.593 [2024-12-06 18:03:46.363494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.593 [2024-12-06 18:03:46.491684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59658 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59658 /var/tmp/spdk2.sock 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59658 /var/tmp/spdk2.sock 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:35.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59658 /var/tmp/spdk2.sock 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59658 ']' 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.530 18:03:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.530 [2024-12-06 18:03:47.581306] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:06:35.531 [2024-12-06 18:03:47.581439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59658 ] 00:06:35.790 [2024-12-06 18:03:47.766840] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59642 has claimed it. 00:06:35.790 [2024-12-06 18:03:47.766924] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:36.356 ERROR: process (pid: 59658) is no longer running 00:06:36.356 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59658) - No such process 00:06:36.356 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.356 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:36.356 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:36.356 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.356 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.356 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.356 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59642 00:06:36.356 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59642 00:06:36.356 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59642 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59642 ']' 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59642 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59642 00:06:36.615 
killing process with pid 59642 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59642' 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59642 00:06:36.615 18:03:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59642 00:06:39.901 00:06:39.901 real 0m5.305s 00:06:39.901 user 0m5.595s 00:06:39.901 sys 0m0.788s 00:06:39.901 18:03:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.901 18:03:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.901 ************************************ 00:06:39.901 END TEST locking_app_on_locked_coremask 00:06:39.901 ************************************ 00:06:39.901 18:03:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.901 18:03:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.901 18:03:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.901 18:03:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.901 ************************************ 00:06:39.901 START TEST locking_overlapped_coremask 00:06:39.901 ************************************ 00:06:39.901 18:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:39.901 18:03:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59733 00:06:39.901 18:03:51 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.902 18:03:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59733 /var/tmp/spdk.sock 00:06:39.902 18:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59733 ']' 00:06:39.902 18:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.902 18:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.902 18:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.902 18:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.902 18:03:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.902 [2024-12-06 18:03:51.560867] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:06:39.902 [2024-12-06 18:03:51.561004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59733 ] 00:06:39.902 [2024-12-06 18:03:51.744399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.902 [2024-12-06 18:03:51.879421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.902 [2024-12-06 18:03:51.879575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.902 [2024-12-06 18:03:51.879662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59751 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59751 /var/tmp/spdk2.sock 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59751 /var/tmp/spdk2.sock 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59751 /var/tmp/spdk2.sock 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59751 ']' 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.837 18:03:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.837 [2024-12-06 18:03:52.999670] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:06:40.837 [2024-12-06 18:03:53.000044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:06:41.096 [2024-12-06 18:03:53.193070] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59733 has claimed it. 00:06:41.096 [2024-12-06 18:03:53.193153] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:41.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59751) - No such process 00:06:41.664 ERROR: process (pid: 59751) is no longer running 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59733 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59733 ']' 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59733 00:06:41.664 18:03:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59733 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.664 killing process with pid 59733 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59733' 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59733 00:06:41.664 18:03:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59733 00:06:44.985 00:06:44.985 real 0m5.037s 00:06:44.985 user 0m13.797s 00:06:44.985 sys 0m0.623s 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.985 ************************************ 00:06:44.985 END TEST locking_overlapped_coremask 00:06:44.985 ************************************ 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.985 18:03:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:44.985 18:03:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.985 18:03:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.985 18:03:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.985 ************************************ 00:06:44.985 START TEST 
locking_overlapped_coremask_via_rpc 00:06:44.985 ************************************ 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59826 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59826 /var/tmp/spdk.sock 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59826 ']' 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.985 18:03:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.985 [2024-12-06 18:03:56.692829] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:06:44.985 [2024-12-06 18:03:56.692975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59826 ] 00:06:44.985 [2024-12-06 18:03:56.871067] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:44.985 [2024-12-06 18:03:56.871136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.985 [2024-12-06 18:03:56.994901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.985 [2024-12-06 18:03:56.995044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.985 [2024-12-06 18:03:56.995108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59844 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59844 /var/tmp/spdk2.sock 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59844 ']' 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.919 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.919 18:03:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.177 [2024-12-06 18:03:58.120369] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:06:46.177 [2024-12-06 18:03:58.120505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59844 ] 00:06:46.177 [2024-12-06 18:03:58.303823] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.177 [2024-12-06 18:03:58.303907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.434 [2024-12-06 18:03:58.584506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.434 [2024-12-06 18:03:58.584588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.434 [2024-12-06 18:03:58.584550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.965 18:04:00 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 [2024-12-06 18:04:00.842335] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59826 has claimed it. 00:06:48.965 request: 00:06:48.965 { 00:06:48.965 "method": "framework_enable_cpumask_locks", 00:06:48.965 "req_id": 1 00:06:48.965 } 00:06:48.965 Got JSON-RPC error response 00:06:48.965 response: 00:06:48.965 { 00:06:48.965 "code": -32603, 00:06:48.965 "message": "Failed to claim CPU core: 2" 00:06:48.965 } 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59826 /var/tmp/spdk.sock 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59826 ']' 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.965 18:04:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.965 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.965 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59844 /var/tmp/spdk2.sock 00:06:48.965 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59844 ']' 00:06:48.965 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.965 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.965 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:48.965 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.965 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.222 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.222 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.222 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:49.222 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.222 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.222 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.222 00:06:49.222 real 0m4.821s 00:06:49.222 user 0m1.600s 00:06:49.222 sys 0m0.216s 00:06:49.222 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.222 18:04:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.222 ************************************ 00:06:49.222 END TEST locking_overlapped_coremask_via_rpc 00:06:49.222 ************************************ 00:06:49.479 18:04:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:49.479 18:04:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59826 ]] 00:06:49.479 18:04:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59826 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59826 ']' 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59826 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59826 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.479 killing process with pid 59826 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59826' 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59826 00:06:49.479 18:04:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59826 00:06:52.762 18:04:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59844 ]] 00:06:52.762 18:04:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59844 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59844 ']' 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59844 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59844 00:06:52.762 killing process with pid 59844 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59844' 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59844 00:06:52.762 18:04:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59844 00:06:55.289 18:04:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.289 Process with pid 59826 is not found 00:06:55.289 Process with pid 59844 is not found 00:06:55.289 18:04:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:55.289 18:04:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59826 ]] 00:06:55.289 18:04:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59826 00:06:55.289 18:04:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59826 ']' 00:06:55.289 18:04:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59826 00:06:55.289 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59826) - No such process 00:06:55.289 18:04:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59826 is not found' 00:06:55.289 18:04:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59844 ]] 00:06:55.289 18:04:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59844 00:06:55.290 18:04:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59844 ']' 00:06:55.290 18:04:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59844 00:06:55.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59844) - No such process 00:06:55.290 18:04:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59844 is not found' 00:06:55.290 18:04:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.290 00:06:55.290 real 0m56.454s 00:06:55.290 user 1m37.972s 00:06:55.290 sys 0m6.905s 00:06:55.290 18:04:07 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.290 ************************************ 00:06:55.290 END TEST cpu_locks 00:06:55.290 
************************************ 00:06:55.290 18:04:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.290 00:06:55.290 real 1m31.070s 00:06:55.290 user 2m49.397s 00:06:55.290 sys 0m11.487s 00:06:55.290 18:04:07 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.290 ************************************ 00:06:55.290 END TEST event 00:06:55.290 ************************************ 00:06:55.290 18:04:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.549 18:04:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.549 18:04:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.549 18:04:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.549 18:04:07 -- common/autotest_common.sh@10 -- # set +x 00:06:55.549 ************************************ 00:06:55.549 START TEST thread 00:06:55.549 ************************************ 00:06:55.549 18:04:07 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.549 * Looking for test storage... 
00:06:55.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:55.549 18:04:07 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.549 18:04:07 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.549 18:04:07 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.549 18:04:07 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.549 18:04:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.549 18:04:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.549 18:04:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.549 18:04:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.549 18:04:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.549 18:04:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.549 18:04:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.549 18:04:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.549 18:04:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.549 18:04:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.549 18:04:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.549 18:04:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:55.549 18:04:07 thread -- scripts/common.sh@345 -- # : 1 00:06:55.549 18:04:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.549 18:04:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.549 18:04:07 thread -- scripts/common.sh@365 -- # decimal 1 00:06:55.549 18:04:07 thread -- scripts/common.sh@353 -- # local d=1 00:06:55.549 18:04:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.549 18:04:07 thread -- scripts/common.sh@355 -- # echo 1 00:06:55.549 18:04:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.549 18:04:07 thread -- scripts/common.sh@366 -- # decimal 2 00:06:55.549 18:04:07 thread -- scripts/common.sh@353 -- # local d=2 00:06:55.549 18:04:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.549 18:04:07 thread -- scripts/common.sh@355 -- # echo 2 00:06:55.549 18:04:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.549 18:04:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.549 18:04:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.549 18:04:07 thread -- scripts/common.sh@368 -- # return 0 00:06:55.549 18:04:07 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.549 18:04:07 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.549 --rc genhtml_branch_coverage=1 00:06:55.549 --rc genhtml_function_coverage=1 00:06:55.549 --rc genhtml_legend=1 00:06:55.549 --rc geninfo_all_blocks=1 00:06:55.549 --rc geninfo_unexecuted_blocks=1 00:06:55.550 00:06:55.550 ' 00:06:55.550 18:04:07 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.550 --rc genhtml_branch_coverage=1 00:06:55.550 --rc genhtml_function_coverage=1 00:06:55.550 --rc genhtml_legend=1 00:06:55.550 --rc geninfo_all_blocks=1 00:06:55.550 --rc geninfo_unexecuted_blocks=1 00:06:55.550 00:06:55.550 ' 00:06:55.550 18:04:07 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.550 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.550 --rc genhtml_branch_coverage=1 00:06:55.550 --rc genhtml_function_coverage=1 00:06:55.550 --rc genhtml_legend=1 00:06:55.550 --rc geninfo_all_blocks=1 00:06:55.550 --rc geninfo_unexecuted_blocks=1 00:06:55.550 00:06:55.550 ' 00:06:55.550 18:04:07 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.550 --rc genhtml_branch_coverage=1 00:06:55.550 --rc genhtml_function_coverage=1 00:06:55.550 --rc genhtml_legend=1 00:06:55.550 --rc geninfo_all_blocks=1 00:06:55.550 --rc geninfo_unexecuted_blocks=1 00:06:55.550 00:06:55.550 ' 00:06:55.550 18:04:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.550 18:04:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:55.550 18:04:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.550 18:04:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.550 ************************************ 00:06:55.550 START TEST thread_poller_perf 00:06:55.550 ************************************ 00:06:55.550 18:04:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.809 [2024-12-06 18:04:07.722526] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:06:55.809 [2024-12-06 18:04:07.722775] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:06:55.809 [2024-12-06 18:04:07.903883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.068 [2024-12-06 18:04:08.041006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.068 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:57.482 [2024-12-06T18:04:09.650Z] ====================================== 00:06:57.482 [2024-12-06T18:04:09.650Z] busy:2300392096 (cyc) 00:06:57.482 [2024-12-06T18:04:09.650Z] total_run_count: 320000 00:06:57.482 [2024-12-06T18:04:09.650Z] tsc_hz: 2290000000 (cyc) 00:06:57.482 [2024-12-06T18:04:09.650Z] ====================================== 00:06:57.482 [2024-12-06T18:04:09.650Z] poller_cost: 7188 (cyc), 3138 (nsec) 00:06:57.482 00:06:57.482 real 0m1.648s 00:06:57.482 user 0m1.437s 00:06:57.482 sys 0m0.101s 00:06:57.482 18:04:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.482 18:04:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.482 ************************************ 00:06:57.482 END TEST thread_poller_perf 00:06:57.482 ************************************ 00:06:57.482 18:04:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.482 18:04:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:57.482 18:04:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.482 18:04:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.482 ************************************ 00:06:57.482 START TEST thread_poller_perf 00:06:57.482 
************************************ 00:06:57.482 18:04:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.482 [2024-12-06 18:04:09.415441] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:06:57.482 [2024-12-06 18:04:09.415677] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60092 ] 00:06:57.482 [2024-12-06 18:04:09.597342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.739 [2024-12-06 18:04:09.733095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.739 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:59.111 [2024-12-06T18:04:11.279Z] ====================================== 00:06:59.111 [2024-12-06T18:04:11.279Z] busy:2294352764 (cyc) 00:06:59.111 [2024-12-06T18:04:11.279Z] total_run_count: 3862000 00:06:59.111 [2024-12-06T18:04:11.279Z] tsc_hz: 2290000000 (cyc) 00:06:59.111 [2024-12-06T18:04:11.279Z] ====================================== 00:06:59.111 [2024-12-06T18:04:11.279Z] poller_cost: 594 (cyc), 259 (nsec) 00:06:59.111 ************************************ 00:06:59.111 END TEST thread_poller_perf 00:06:59.111 ************************************ 00:06:59.111 00:06:59.111 real 0m1.618s 00:06:59.111 user 0m1.420s 00:06:59.111 sys 0m0.090s 00:06:59.111 18:04:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.111 18:04:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.111 18:04:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:59.111 00:06:59.111 real 0m3.540s 00:06:59.111 user 0m2.977s 00:06:59.111 sys 0m0.354s 00:06:59.111 ************************************ 
00:06:59.111 END TEST thread 00:06:59.111 ************************************ 00:06:59.111 18:04:11 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.111 18:04:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.111 18:04:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:59.111 18:04:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:59.111 18:04:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.111 18:04:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.111 18:04:11 -- common/autotest_common.sh@10 -- # set +x 00:06:59.111 ************************************ 00:06:59.111 START TEST app_cmdline 00:06:59.111 ************************************ 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:59.111 * Looking for test storage... 00:06:59.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:59.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.111 18:04:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.111 --rc genhtml_branch_coverage=1 00:06:59.111 --rc genhtml_function_coverage=1 00:06:59.111 --rc genhtml_legend=1 00:06:59.111 --rc geninfo_all_blocks=1 00:06:59.111 --rc geninfo_unexecuted_blocks=1 00:06:59.111 00:06:59.111 ' 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.111 --rc genhtml_branch_coverage=1 00:06:59.111 --rc genhtml_function_coverage=1 00:06:59.111 --rc genhtml_legend=1 00:06:59.111 --rc geninfo_all_blocks=1 00:06:59.111 --rc geninfo_unexecuted_blocks=1 00:06:59.111 00:06:59.111 ' 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.111 --rc genhtml_branch_coverage=1 00:06:59.111 --rc genhtml_function_coverage=1 00:06:59.111 --rc genhtml_legend=1 00:06:59.111 --rc geninfo_all_blocks=1 00:06:59.111 --rc geninfo_unexecuted_blocks=1 00:06:59.111 00:06:59.111 ' 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.111 --rc genhtml_branch_coverage=1 00:06:59.111 --rc genhtml_function_coverage=1 00:06:59.111 --rc genhtml_legend=1 00:06:59.111 --rc geninfo_all_blocks=1 00:06:59.111 --rc 
geninfo_unexecuted_blocks=1 00:06:59.111 00:06:59.111 ' 00:06:59.111 18:04:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:59.111 18:04:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60176 00:06:59.111 18:04:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60176 00:06:59.111 18:04:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60176 ']' 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.111 18:04:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.369 [2024-12-06 18:04:11.412149] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:06:59.369 [2024-12-06 18:04:11.412372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60176 ] 00:06:59.626 [2024-12-06 18:04:11.594889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.626 [2024-12-06 18:04:11.731076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.998 18:04:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.998 18:04:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:00.998 18:04:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:00.998 { 00:07:00.998 "version": "SPDK v25.01-pre git sha1 0ea9ac02f", 00:07:00.998 "fields": { 00:07:00.998 "major": 25, 00:07:00.998 "minor": 1, 00:07:00.998 "patch": 0, 00:07:00.998 "suffix": "-pre", 00:07:00.998 "commit": "0ea9ac02f" 00:07:00.998 } 00:07:00.998 } 00:07:00.998 18:04:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:00.998 18:04:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:00.998 18:04:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:00.998 18:04:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:00.998 18:04:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:00.998 18:04:12 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.998 18:04:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.998 18:04:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:00.998 18:04:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:00.998 18:04:12 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.998 18:04:13 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:00.998 18:04:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:00.999 18:04:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:00.999 18:04:13 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.266 request: 00:07:01.266 { 00:07:01.266 "method": "env_dpdk_get_mem_stats", 00:07:01.266 "req_id": 1 00:07:01.266 } 00:07:01.266 Got JSON-RPC error response 00:07:01.266 response: 00:07:01.266 { 00:07:01.266 "code": -32601, 00:07:01.266 "message": "Method not found" 00:07:01.266 } 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.266 18:04:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60176 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60176 ']' 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60176 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60176 00:07:01.266 killing process with pid 60176 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60176' 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 60176 00:07:01.266 18:04:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 60176 00:07:04.562 ************************************ 00:07:04.562 END TEST app_cmdline 00:07:04.562 ************************************ 00:07:04.562 00:07:04.562 real 0m5.088s 00:07:04.562 user 0m5.533s 00:07:04.562 sys 0m0.630s 00:07:04.562 18:04:16 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.562 18:04:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.562 18:04:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:04.562 18:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.562 18:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.562 18:04:16 -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.562 ************************************ 00:07:04.562 START TEST version 00:07:04.562 ************************************ 00:07:04.562 18:04:16 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:04.562 * Looking for test storage... 00:07:04.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:04.562 18:04:16 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.562 18:04:16 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.562 18:04:16 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.562 18:04:16 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.562 18:04:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.562 18:04:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.562 18:04:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.562 18:04:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.562 18:04:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.562 18:04:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.562 18:04:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.562 18:04:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.562 18:04:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.562 18:04:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.562 18:04:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.562 18:04:16 version -- scripts/common.sh@344 -- # case "$op" in 00:07:04.562 18:04:16 version -- scripts/common.sh@345 -- # : 1 00:07:04.562 18:04:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.562 18:04:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.562 18:04:16 version -- scripts/common.sh@365 -- # decimal 1 00:07:04.562 18:04:16 version -- scripts/common.sh@353 -- # local d=1 00:07:04.562 18:04:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.562 18:04:16 version -- scripts/common.sh@355 -- # echo 1 00:07:04.562 18:04:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.562 18:04:16 version -- scripts/common.sh@366 -- # decimal 2 00:07:04.562 18:04:16 version -- scripts/common.sh@353 -- # local d=2 00:07:04.562 18:04:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.562 18:04:16 version -- scripts/common.sh@355 -- # echo 2 00:07:04.562 18:04:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.562 18:04:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.562 18:04:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.562 18:04:16 version -- scripts/common.sh@368 -- # return 0 00:07:04.562 18:04:16 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.562 18:04:16 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.562 --rc genhtml_branch_coverage=1 00:07:04.562 --rc genhtml_function_coverage=1 00:07:04.562 --rc genhtml_legend=1 00:07:04.562 --rc geninfo_all_blocks=1 00:07:04.562 --rc geninfo_unexecuted_blocks=1 00:07:04.562 00:07:04.562 ' 00:07:04.562 18:04:16 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.562 --rc genhtml_branch_coverage=1 00:07:04.562 --rc genhtml_function_coverage=1 00:07:04.562 --rc genhtml_legend=1 00:07:04.562 --rc geninfo_all_blocks=1 00:07:04.563 --rc geninfo_unexecuted_blocks=1 00:07:04.563 00:07:04.563 ' 00:07:04.563 18:04:16 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.563 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.563 --rc genhtml_branch_coverage=1 00:07:04.563 --rc genhtml_function_coverage=1 00:07:04.563 --rc genhtml_legend=1 00:07:04.563 --rc geninfo_all_blocks=1 00:07:04.563 --rc geninfo_unexecuted_blocks=1 00:07:04.563 00:07:04.563 ' 00:07:04.563 18:04:16 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.563 --rc genhtml_branch_coverage=1 00:07:04.563 --rc genhtml_function_coverage=1 00:07:04.563 --rc genhtml_legend=1 00:07:04.563 --rc geninfo_all_blocks=1 00:07:04.563 --rc geninfo_unexecuted_blocks=1 00:07:04.563 00:07:04.563 ' 00:07:04.563 18:04:16 version -- app/version.sh@17 -- # get_header_version major 00:07:04.563 18:04:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:04.563 18:04:16 version -- app/version.sh@14 -- # cut -f2 00:07:04.563 18:04:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.563 18:04:16 version -- app/version.sh@17 -- # major=25 00:07:04.563 18:04:16 version -- app/version.sh@18 -- # get_header_version minor 00:07:04.563 18:04:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:04.563 18:04:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.563 18:04:16 version -- app/version.sh@14 -- # cut -f2 00:07:04.563 18:04:16 version -- app/version.sh@18 -- # minor=1 00:07:04.563 18:04:16 version -- app/version.sh@19 -- # get_header_version patch 00:07:04.563 18:04:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:04.563 18:04:16 version -- app/version.sh@14 -- # cut -f2 00:07:04.563 18:04:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.563 18:04:16 version -- app/version.sh@19 -- # patch=0 00:07:04.563 
18:04:16 version -- app/version.sh@20 -- # get_header_version suffix 00:07:04.563 18:04:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:04.563 18:04:16 version -- app/version.sh@14 -- # cut -f2 00:07:04.563 18:04:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:04.563 18:04:16 version -- app/version.sh@20 -- # suffix=-pre 00:07:04.563 18:04:16 version -- app/version.sh@22 -- # version=25.1 00:07:04.563 18:04:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:04.563 18:04:16 version -- app/version.sh@28 -- # version=25.1rc0 00:07:04.563 18:04:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:04.563 18:04:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:04.563 18:04:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:04.563 18:04:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:04.563 00:07:04.563 real 0m0.295s 00:07:04.563 user 0m0.179s 00:07:04.563 sys 0m0.163s 00:07:04.563 ************************************ 00:07:04.563 END TEST version 00:07:04.563 ************************************ 00:07:04.563 18:04:16 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.563 18:04:16 version -- common/autotest_common.sh@10 -- # set +x 00:07:04.563 18:04:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:04.563 18:04:16 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:04.563 18:04:16 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:04.563 18:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.563 18:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.563 18:04:16 -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.563 ************************************ 00:07:04.563 START TEST bdev_raid 00:07:04.563 ************************************ 00:07:04.563 18:04:16 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:04.563 * Looking for test storage... 00:07:04.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:04.563 18:04:16 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.563 18:04:16 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.563 18:04:16 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.822 18:04:16 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.822 18:04:16 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:04.822 18:04:16 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.822 18:04:16 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.822 --rc genhtml_branch_coverage=1 00:07:04.822 --rc genhtml_function_coverage=1 00:07:04.822 --rc genhtml_legend=1 00:07:04.822 --rc geninfo_all_blocks=1 00:07:04.822 --rc geninfo_unexecuted_blocks=1 00:07:04.822 00:07:04.822 ' 00:07:04.822 18:04:16 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.822 --rc genhtml_branch_coverage=1 00:07:04.822 --rc genhtml_function_coverage=1 00:07:04.822 --rc genhtml_legend=1 00:07:04.822 --rc geninfo_all_blocks=1 00:07:04.822 --rc geninfo_unexecuted_blocks=1 00:07:04.822 00:07:04.822 ' 00:07:04.822 18:04:16 bdev_raid -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:07:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.822 --rc genhtml_branch_coverage=1 00:07:04.822 --rc genhtml_function_coverage=1 00:07:04.822 --rc genhtml_legend=1 00:07:04.822 --rc geninfo_all_blocks=1 00:07:04.822 --rc geninfo_unexecuted_blocks=1 00:07:04.822 00:07:04.822 ' 00:07:04.822 18:04:16 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.822 --rc genhtml_branch_coverage=1 00:07:04.822 --rc genhtml_function_coverage=1 00:07:04.822 --rc genhtml_legend=1 00:07:04.822 --rc geninfo_all_blocks=1 00:07:04.822 --rc geninfo_unexecuted_blocks=1 00:07:04.822 00:07:04.822 ' 00:07:04.822 18:04:16 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:04.822 18:04:16 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:04.822 18:04:16 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:04.822 18:04:16 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:04.822 18:04:16 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:04.822 18:04:16 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:04.822 18:04:16 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:04.822 18:04:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.822 18:04:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.822 18:04:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.822 ************************************ 00:07:04.822 START TEST raid1_resize_data_offset_test 00:07:04.822 ************************************ 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=60379 00:07:04.822 Process raid pid: 60379 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60379' 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60379 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60379 ']' 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.822 18:04:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.823 [2024-12-06 18:04:16.862408] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:07:04.823 [2024-12-06 18:04:16.863117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.081 [2024-12-06 18:04:17.044725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.081 [2024-12-06 18:04:17.181573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.339 [2024-12-06 18:04:17.438045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.339 [2024-12-06 18:04:17.438112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.907 malloc0 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.907 malloc1 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.907 18:04:17 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.907 null0 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.907 [2024-12-06 18:04:17.960390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:05.907 [2024-12-06 18:04:17.962593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:05.907 [2024-12-06 18:04:17.962709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:05.907 [2024-12-06 18:04:17.962943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:05.907 [2024-12-06 18:04:17.963000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:05.907 [2024-12-06 18:04:17.963378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:05.907 [2024-12-06 18:04:17.963635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:05.907 [2024-12-06 18:04:17.963689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:05.907 [2024-12-06 18:04:17.963931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:05.907 18:04:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.907 18:04:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:05.907 18:04:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:05.907 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.907 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.907 [2024-12-06 18:04:18.016325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:05.907 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.907 18:04:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:05.907 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.907 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.525 malloc2 00:07:06.525 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.525 18:04:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:06.525 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.525 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.525 [2024-12-06 18:04:18.666777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:06.525 [2024-12-06 18:04:18.687428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:06.525 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.525 [2024-12-06 18:04:18.689548] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60379 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60379 ']' 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60379 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60379 00:07:06.783 killing process with pid 60379 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60379' 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60379 00:07:06.783 18:04:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60379 00:07:06.783 [2024-12-06 18:04:18.782232] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.783 [2024-12-06 18:04:18.784257] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:06.783 [2024-12-06 18:04:18.784333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.783 [2024-12-06 18:04:18.784353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:06.783 [2024-12-06 18:04:18.829058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.784 [2024-12-06 18:04:18.829445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.784 [2024-12-06 18:04:18.829466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:09.319 [2024-12-06 18:04:20.967699] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.256 18:04:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:10.256 00:07:10.256 real 0m5.520s 00:07:10.256 user 0m5.477s 00:07:10.256 sys 0m0.525s 00:07:10.256 
************************************ 00:07:10.256 END TEST raid1_resize_data_offset_test 00:07:10.256 ************************************ 00:07:10.256 18:04:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.256 18:04:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.256 18:04:22 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:10.256 18:04:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.256 18:04:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.256 18:04:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.256 ************************************ 00:07:10.256 START TEST raid0_resize_superblock_test 00:07:10.256 ************************************ 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60469 00:07:10.256 Process raid pid: 60469 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60469' 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60469 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60469 ']' 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.256 18:04:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.516 [2024-12-06 18:04:22.465429] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:07:10.516 [2024-12-06 18:04:22.465695] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.516 [2024-12-06 18:04:22.646612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.775 [2024-12-06 18:04:22.780558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.034 [2024-12-06 18:04:23.020072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.034 [2024-12-06 18:04:23.020140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.293 18:04:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.294 18:04:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:11.294 18:04:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:11.294 18:04:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.294 18:04:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:11.863 malloc0 00:07:11.863 18:04:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.863 18:04:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:11.863 18:04:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.863 18:04:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.863 [2024-12-06 18:04:23.996518] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:11.863 [2024-12-06 18:04:23.996601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.863 [2024-12-06 18:04:23.996630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:11.863 [2024-12-06 18:04:23.996644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.863 [2024-12-06 18:04:23.999251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.863 [2024-12-06 18:04:23.999301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:11.863 pt0 00:07:11.863 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.863 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:11.863 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.863 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 b0629a0a-d7dc-4975-a04b-3b5baa9f6793 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 9e2a07d0-3697-493e-952a-6bec4b0760fb 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 c61bfd95-dc01-4f04-a090-d4854ce6c181 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 [2024-12-06 18:04:24.132404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9e2a07d0-3697-493e-952a-6bec4b0760fb is claimed 00:07:12.123 [2024-12-06 18:04:24.132645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c61bfd95-dc01-4f04-a090-d4854ce6c181 is claimed 00:07:12.123 [2024-12-06 18:04:24.132836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:12.123 [2024-12-06 18:04:24.132856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:12.123 [2024-12-06 18:04:24.133230] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:12.123 [2024-12-06 18:04:24.133473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:12.123 [2024-12-06 18:04:24.133495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:12.123 [2024-12-06 18:04:24.133716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:12.123 18:04:24 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 [2024-12-06 18:04:24.228513] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 [2024-12-06 18:04:24.276426] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.123 [2024-12-06 18:04:24.276520] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9e2a07d0-3697-493e-952a-6bec4b0760fb' was resized: old size 131072, new size 204800 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.123 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 [2024-12-06 18:04:24.288393] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.124 [2024-12-06 18:04:24.288511] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c61bfd95-dc01-4f04-a090-d4854ce6c181' was resized: old size 131072, new size 204800 00:07:12.124 [2024-12-06 18:04:24.288589] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.398 18:04:24 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.398 [2024-12-06 18:04:24.396245] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.398 [2024-12-06 18:04:24.443899] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:07:12.398 [2024-12-06 18:04:24.444074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:12.398 [2024-12-06 18:04:24.444126] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.398 [2024-12-06 18:04:24.444174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:12.398 [2024-12-06 18:04:24.444341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.398 [2024-12-06 18:04:24.444418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.398 [2024-12-06 18:04:24.444479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.398 [2024-12-06 18:04:24.455782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:12.398 [2024-12-06 18:04:24.455936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.398 [2024-12-06 18:04:24.455982] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:12.398 [2024-12-06 18:04:24.456031] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.398 [2024-12-06 18:04:24.458760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.398 [2024-12-06 18:04:24.458870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:12.398 pt0 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:12.398 [2024-12-06 18:04:24.461044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9e2a07d0-3697-493e-952a-6bec4b0760fb 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.398 [2024-12-06 18:04:24.461200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9e2a07d0-3697-493e-952a-6bec4b0760fb is claimed 00:07:12.398 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.398 [2024-12-06 18:04:24.461400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c61bfd95-dc01-4f04-a090-d4854ce6c181 00:07:12.398 [2024-12-06 18:04:24.461482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c61bfd95-dc01-4f04-a090-d4854ce6c181 is claimed 00:07:12.398 [2024-12-06 18:04:24.461715] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c61bfd95-dc01-4f04-a090-d4854ce6c181 (2) smaller than existing raid bdev Raid (3) 00:07:12.398 [2024-12-06 18:04:24.461792] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9e2a07d0-3697-493e-952a-6bec4b0760fb: File exists 00:07:12.398 [2024-12-06 18:04:24.461877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:12.398 [2024-12-06 18:04:24.461914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:12.398 [2024-12-06 18:04:24.462238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:12.398 [2024-12-06 18:04:24.462434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:12.398 [2024-12-06 
18:04:24.462445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:12.399 [2024-12-06 18:04:24.462634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:12.399 [2024-12-06 18:04:24.480107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60469 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60469 ']' 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60469 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60469 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60469' 00:07:12.399 killing process with pid 60469 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60469 00:07:12.399 [2024-12-06 18:04:24.550127] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.399 18:04:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60469 00:07:12.399 [2024-12-06 18:04:24.550289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.399 [2024-12-06 18:04:24.550380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.399 [2024-12-06 18:04:24.550431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:14.303 [2024-12-06 18:04:26.264880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.678 18:04:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:15.678 00:07:15.678 real 0m5.240s 00:07:15.678 user 0m5.486s 00:07:15.678 sys 0m0.569s 00:07:15.678 ************************************ 00:07:15.678 END TEST raid0_resize_superblock_test 00:07:15.678 ************************************ 00:07:15.678 18:04:27 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.678 18:04:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.678 18:04:27 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:15.678 18:04:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.678 18:04:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.678 18:04:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.678 ************************************ 00:07:15.678 START TEST raid1_resize_superblock_test 00:07:15.678 ************************************ 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60573 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60573' 00:07:15.678 Process raid pid: 60573 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60573 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60573 ']' 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:15.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.678 18:04:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.678 [2024-12-06 18:04:27.761693] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:07:15.678 [2024-12-06 18:04:27.761938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.936 [2024-12-06 18:04:27.945268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.936 [2024-12-06 18:04:28.082675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.195 [2024-12-06 18:04:28.323957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.195 [2024-12-06 18:04:28.324171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.762 18:04:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.762 18:04:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.762 18:04:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:16.762 18:04:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.762 18:04:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.330 malloc0 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.330 [2024-12-06 18:04:29.331467] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:17.330 [2024-12-06 18:04:29.331637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.330 [2024-12-06 18:04:29.331702] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:17.330 [2024-12-06 18:04:29.331743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.330 [2024-12-06 18:04:29.334343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.330 [2024-12-06 18:04:29.334439] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:17.330 pt0 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.330 ba5d47bc-cbb1-4e0e-8def-26fb3112764c 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:07:17.330 e2a92024-51d3-4bed-9b15-a3e52df3659e 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:17.330 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.331 8251e43d-9f77-467c-b060-8e065a07264d 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.331 [2024-12-06 18:04:29.467653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e2a92024-51d3-4bed-9b15-a3e52df3659e is claimed 00:07:17.331 [2024-12-06 18:04:29.467811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8251e43d-9f77-467c-b060-8e065a07264d is claimed 00:07:17.331 [2024-12-06 18:04:29.468009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:17.331 [2024-12-06 18:04:29.468030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:17.331 [2024-12-06 18:04:29.468408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:17.331 [2024-12-06 18:04:29.468672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:07:17.331 [2024-12-06 18:04:29.468692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:17.331 [2024-12-06 18:04:29.468923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.331 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:17.591 18:04:29 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.591 [2024-12-06 18:04:29.583743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.591 [2024-12-06 18:04:29.615640] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.591 [2024-12-06 18:04:29.615753] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e2a92024-51d3-4bed-9b15-a3e52df3659e' was resized: old size 131072, new size 204800 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.591 [2024-12-06 18:04:29.627550] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:17.591 [2024-12-06 18:04:29.627589] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8251e43d-9f77-467c-b060-8e065a07264d' was resized: old size 131072, new size 204800 00:07:17.591 [2024-12-06 18:04:29.627637] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:17.591 [2024-12-06 18:04:29.735452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.591 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.850 [2024-12-06 18:04:29.783123] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:17.850 [2024-12-06 18:04:29.783285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:17.850 [2024-12-06 18:04:29.783341] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:17.850 [2024-12-06 18:04:29.783549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.850 [2024-12-06 18:04:29.783842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.850 [2024-12-06 18:04:29.783969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.850 [2024-12-06 18:04:29.784033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.850 [2024-12-06 18:04:29.794996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:17.850 [2024-12-06 18:04:29.795163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.850 [2024-12-06 18:04:29.795209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:17.850 [2024-12-06 18:04:29.795257] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.850 [2024-12-06 18:04:29.797893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.850 [2024-12-06 18:04:29.798006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:17.850 pt0 00:07:17.850 [2024-12-06 18:04:29.800160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e2a92024-51d3-4bed-9b15-a3e52df3659e 00:07:17.850 18:04:29 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.850 [2024-12-06 18:04:29.800323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e2a92024-51d3-4bed-9b15-a3e52df3659e is claimed 00:07:17.850 [2024-12-06 18:04:29.800522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8251e43d-9f77-467c-b060-8e065a07264d 00:07:17.850 [2024-12-06 18:04:29.800593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8251e43d-9f77-467c-b060-8e065a07264d is claimed 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:17.850 [2024-12-06 18:04:29.800796] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8251e43d-9f77-467c-b060-8e065a07264d (2) smaller than existing raid bdev Raid (3) 00:07:17.850 [2024-12-06 18:04:29.800885] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev e2a92024-51d3-4bed-9b15-a3e52df3659e: File exists 00:07:17.850 [2024-12-06 18:04:29.800969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:17.850 [2024-12-06 18:04:29.801011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.850 [2024-12-06 18:04:29.801341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:17.850 [2024-12-06 18:04:29.801535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:17.850 [2024-12-06 18:04:29.801591] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.850 [2024-12-06 18:04:29.801827] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:17.850 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.851 [2024-12-06 18:04:29.819315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60573 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60573 ']' 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60573 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
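The block counts traced above are internally consistent: each 64 MiB lvol is 131072 blocks of 512 bytes, the RAID1 bdev reports 122880 usable blocks per base bdev, and after resizing both lvols to 100 MiB (204800 blocks) the raid grows to 196608 blocks. A minimal sketch checking that arithmetic — the 8192-block superblock reserve is an assumption inferred from the differences in the log, not taken from SPDK source:

```python
# Sketch: verify the block-count arithmetic reported in the trace above.
# Assumption: 512-byte blocks (blocklen 512 in the log) and a fixed
# 8192-block per-base-bdev reserve implied by 131072 - 122880.
BLOCKLEN = 512

def mib_to_blocks(mib: int) -> int:
    """Convert a size in MiB to 512-byte blocks."""
    return mib * 1024 * 1024 // BLOCKLEN

SB_RESERVE = 131072 - 122880  # 8192 blocks, inferred from the log

# bdev_lvol_resize lvs0/lvol0 100: "old size 131072, new size 204800"
assert mib_to_blocks(64) == 131072
assert mib_to_blocks(100) == 204800

# RAID1 over two equal lvols: usable blocks per base bdev minus the
# reserve -> "block count was changed from 122880 to 196608"
assert mib_to_blocks(64) - SB_RESERVE == 122880
assert mib_to_blocks(100) - SB_RESERVE == 196608
```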
00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60573 00:07:17.851 killing process with pid 60573 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60573' 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60573 00:07:17.851 [2024-12-06 18:04:29.884728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.851 [2024-12-06 18:04:29.884832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.851 18:04:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60573 00:07:17.851 [2024-12-06 18:04:29.884895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.851 [2024-12-06 18:04:29.884907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:19.781 [2024-12-06 18:04:31.563731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.161 18:04:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:21.161 00:07:21.161 real 0m5.241s 00:07:21.161 user 0m5.491s 00:07:21.161 sys 0m0.582s 00:07:21.161 ************************************ 00:07:21.161 END TEST raid1_resize_superblock_test 00:07:21.161 ************************************ 00:07:21.161 18:04:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.161 18:04:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.161 18:04:32 bdev_raid -- 
bdev/bdev_raid.sh@956 -- # uname -s 00:07:21.161 18:04:32 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:21.161 18:04:32 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:21.161 18:04:32 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:21.161 18:04:32 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:21.161 18:04:32 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:21.161 18:04:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.161 18:04:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.161 18:04:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.161 ************************************ 00:07:21.161 START TEST raid_function_test_raid0 00:07:21.161 ************************************ 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60681 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60681' 00:07:21.161 Process raid pid: 60681 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60681 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60681 ']' 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.161 18:04:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:21.161 [2024-12-06 18:04:33.074647] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:07:21.161 [2024-12-06 18:04:33.074877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.161 [2024-12-06 18:04:33.253972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.504 [2024-12-06 18:04:33.391884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.504 [2024-12-06 18:04:33.640460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.504 [2024-12-06 18:04:33.640615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:22.073 Base_1 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:22.073 Base_2 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:22.073 [2024-12-06 18:04:34.124449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:22.073 [2024-12-06 18:04:34.126727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:22.073 [2024-12-06 18:04:34.126890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:22.073 [2024-12-06 18:04:34.126910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:22.073 [2024-12-06 18:04:34.127281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.073 [2024-12-06 18:04:34.127467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:22.073 [2024-12-06 18:04:34.127478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:22.073 [2024-12-06 
18:04:34.127690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:22.073 18:04:34 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:22.330 [2024-12-06 18:04:34.443999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:22.330 /dev/nbd0 00:07:22.330 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.330 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:22.330 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:22.330 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:22.330 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.330 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.330 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:22.330 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:22.331 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.589 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.589 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.589 1+0 records in 00:07:22.589 1+0 records out 00:07:22.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629873 s, 6.5 MB/s 00:07:22.589 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.589 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:22.589 18:04:34 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.589 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.589 18:04:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:22.589 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.590 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:22.590 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:22.590 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:22.590 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.849 { 00:07:22.849 "nbd_device": "/dev/nbd0", 00:07:22.849 "bdev_name": "raid" 00:07:22.849 } 00:07:22.849 ]' 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.849 { 00:07:22.849 "nbd_device": "/dev/nbd0", 00:07:22.849 "bdev_name": "raid" 00:07:22.849 } 00:07:22.849 ]' 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:22.849 18:04:34 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom 
of=/raidtest/raidrandtest bs=512 count=4096 00:07:22.849 4096+0 records in 00:07:22.849 4096+0 records out 00:07:22.849 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0290717 s, 72.1 MB/s 00:07:22.849 18:04:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:23.106 4096+0 records in 00:07:23.106 4096+0 records out 00:07:23.106 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.284567 s, 7.4 MB/s 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:23.106 128+0 records in 00:07:23.106 128+0 records out 00:07:23.106 65536 bytes (66 kB, 64 KiB) copied, 0.000622529 s, 105 MB/s 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 
00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:23.106 2035+0 records in 00:07:23.106 2035+0 records out 00:07:23.106 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00788654 s, 132 MB/s 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:23.106 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:23.107 456+0 records in 00:07:23.107 456+0 records out 00:07:23.107 233472 bytes (233 kB, 228 KiB) copied, 0.00378237 s, 61.7 MB/s 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.107 
18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.107 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.671 [2024-12-06 18:04:35.584337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:23.671 18:04:35 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.671 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60681 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60681 ']' 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60681 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # 
uname 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60681 00:07:23.929 killing process with pid 60681 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60681' 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60681 00:07:23.929 [2024-12-06 18:04:35.955584] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.929 [2024-12-06 18:04:35.955718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.929 18:04:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60681 00:07:23.929 [2024-12-06 18:04:35.955779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.929 [2024-12-06 18:04:35.955797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:24.187 [2024-12-06 18:04:36.205633] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.590 18:04:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:25.590 00:07:25.590 real 0m4.560s 00:07:25.590 user 0m5.392s 00:07:25.590 sys 0m1.057s 00:07:25.590 18:04:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.590 18:04:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:25.590 ************************************ 00:07:25.590 END TEST raid_function_test_raid0 
00:07:25.590 ************************************ 00:07:25.590 18:04:37 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:25.590 18:04:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.590 18:04:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.590 18:04:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.590 ************************************ 00:07:25.590 START TEST raid_function_test_concat 00:07:25.590 ************************************ 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60818 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60818' 00:07:25.590 Process raid pid: 60818 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60818 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60818 ']' 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.590 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.590 18:04:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:25.590 [2024-12-06 18:04:37.692155] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:07:25.590 [2024-12-06 18:04:37.692299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.872 [2024-12-06 18:04:37.872056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.872 [2024-12-06 18:04:38.010079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.129 [2024-12-06 18:04:38.256141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.129 [2024-12-06 18:04:38.256193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.695 Base_1 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.695 18:04:38 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.695 Base_2 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.695 [2024-12-06 18:04:38.709846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:26.695 [2024-12-06 18:04:38.712073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:26.695 [2024-12-06 18:04:38.712182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:26.695 [2024-12-06 18:04:38.712197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:26.695 [2024-12-06 18:04:38.712540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:26.695 [2024-12-06 18:04:38.712743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:26.695 [2024-12-06 18:04:38.712762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:26.695 [2024-12-06 18:04:38.712968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.695 18:04:38 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.695 18:04:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:26.696 18:04:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.696 18:04:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:26.696 18:04:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.696 18:04:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:26.696 18:04:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:26.953 [2024-12-06 18:04:38.997441] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:26.953 /dev/nbd0 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.953 1+0 records in 00:07:26.953 1+0 records out 00:07:26.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514709 s, 8.0 MB/s 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 
4096 '!=' 0 ']' 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:26.953 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:27.211 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:27.211 { 00:07:27.212 "nbd_device": "/dev/nbd0", 00:07:27.212 "bdev_name": "raid" 00:07:27.212 } 00:07:27.212 ]' 00:07:27.212 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:27.212 { 00:07:27.212 "nbd_device": "/dev/nbd0", 00:07:27.212 "bdev_name": "raid" 00:07:27.212 } 00:07:27.212 ]' 00:07:27.212 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:27.470 4096+0 records in 00:07:27.470 4096+0 records out 00:07:27.470 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.0354527 s, 59.2 MB/s 00:07:27.470 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:27.728 4096+0 records in 00:07:27.728 4096+0 records out 00:07:27.728 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.274045 s, 7.7 MB/s 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:27.728 128+0 records in 00:07:27.728 128+0 records out 00:07:27.728 65536 bytes (66 kB, 64 KiB) copied, 0.000575768 s, 114 MB/s 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:27.728 18:04:39 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:27.728 2035+0 records in 00:07:27.728 2035+0 records out 00:07:27.728 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0144135 s, 72.3 MB/s 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:27.728 456+0 records in 00:07:27.728 456+0 records out 00:07:27.728 233472 bytes (233 kB, 228 KiB) copied, 0.00188159 s, 124 MB/s 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:27.728 18:04:39 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.728 18:04:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:27.986 [2024-12-06 18:04:40.106474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.986 
18:04:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:27.986 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:28.247 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.247 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.247 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.247 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60818 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60818 ']' 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60818 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60818 00:07:28.506 killing process with pid 60818 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60818' 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60818 00:07:28.506 [2024-12-06 18:04:40.456432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.506 18:04:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60818 00:07:28.506 [2024-12-06 18:04:40.456550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.506 [2024-12-06 18:04:40.456612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.506 [2024-12-06 18:04:40.456676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:28.765 [2024-12-06 18:04:40.707136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.145 ************************************ 00:07:30.145 END TEST raid_function_test_concat 00:07:30.145 ************************************ 00:07:30.145 18:04:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:30.145 00:07:30.145 real 0m4.445s 00:07:30.145 user 0m5.249s 00:07:30.145 sys 0m1.003s 00:07:30.145 18:04:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.145 18:04:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 
00:07:30.145 18:04:42 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:30.145 18:04:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.145 18:04:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.145 18:04:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.145 ************************************ 00:07:30.145 START TEST raid0_resize_test 00:07:30.145 ************************************ 00:07:30.145 Process raid pid: 60947 00:07:30.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60947 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60947' 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60947 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60947 ']' 00:07:30.145 18:04:42 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.145 18:04:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.145 [2024-12-06 18:04:42.179284] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:07:30.145 [2024-12-06 18:04:42.179565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.404 [2024-12-06 18:04:42.358894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.404 [2024-12-06 18:04:42.499767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.664 [2024-12-06 18:04:42.752551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.664 [2024-12-06 18:04:42.752705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.232 Base_1 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.232 Base_2 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.232 [2024-12-06 18:04:43.142947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:31.232 [2024-12-06 18:04:43.145240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:31.232 [2024-12-06 18:04:43.145395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:31.232 [2024-12-06 18:04:43.145455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:31.232 [2024-12-06 18:04:43.145840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:31.232 [2024-12-06 18:04:43.146058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:31.232 [2024-12-06 18:04:43.146132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, 
raid_bdev 0x617000007780 00:07:31.232 [2024-12-06 18:04:43.146425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.232 [2024-12-06 18:04:43.154931] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.232 [2024-12-06 18:04:43.155060] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:31.232 true 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.232 [2024-12-06 18:04:43.171107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:31.232 18:04:43 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.232 [2024-12-06 18:04:43.214802] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.232 [2024-12-06 18:04:43.214840] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:31.232 [2024-12-06 18:04:43.214880] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:31.232 true 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:31.232 [2024-12-06 18:04:43.226985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:31.232 18:04:43 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60947 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60947 ']' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60947 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60947 00:07:31.232 killing process with pid 60947 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60947' 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60947 00:07:31.232 18:04:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60947 00:07:31.232 [2024-12-06 18:04:43.307673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.232 [2024-12-06 18:04:43.307783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.232 [2024-12-06 18:04:43.307849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.232 [2024-12-06 18:04:43.307869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:31.232 [2024-12-06 18:04:43.329459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.611 18:04:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 
-- # return 0 00:07:32.611 00:07:32.611 real 0m2.598s 00:07:32.611 user 0m2.780s 00:07:32.611 sys 0m0.387s 00:07:32.611 18:04:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.611 ************************************ 00:07:32.611 END TEST raid0_resize_test 00:07:32.611 ************************************ 00:07:32.611 18:04:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.611 18:04:44 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:32.611 18:04:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:32.611 18:04:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.611 18:04:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.611 ************************************ 00:07:32.611 START TEST raid1_resize_test 00:07:32.611 ************************************ 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61014 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61014' 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:32.611 Process raid pid: 61014 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61014 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 61014 ']' 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.611 18:04:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.871 [2024-12-06 18:04:44.830102] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:07:32.871 [2024-12-06 18:04:44.830355] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.871 [2024-12-06 18:04:45.011312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.130 [2024-12-06 18:04:45.149077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.388 [2024-12-06 18:04:45.390234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.388 [2024-12-06 18:04:45.390383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.648 Base_1 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.648 Base_2 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.648 [2024-12-06 18:04:45.800447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:33.648 [2024-12-06 18:04:45.802717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:33.648 [2024-12-06 18:04:45.802812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:33.648 [2024-12-06 18:04:45.802825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:33.648 [2024-12-06 18:04:45.803196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:33.648 [2024-12-06 18:04:45.803355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:33.648 [2024-12-06 18:04:45.803366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:33.648 [2024-12-06 18:04:45.803571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.648 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.648 [2024-12-06 18:04:45.812445] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:33.648 [2024-12-06 18:04:45.812577] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:33.907 true 00:07:33.907 
18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.907 [2024-12-06 18:04:45.828602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.907 [2024-12-06 18:04:45.876314] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:33.907 [2024-12-06 18:04:45.876363] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:33.907 [2024-12-06 18:04:45.876410] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:33.907 true 00:07:33.907 18:04:45 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.907 [2024-12-06 18:04:45.888494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:33.907 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61014 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 61014 ']' 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 61014 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61014 00:07:33.908 killing process with pid 61014 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.908 18:04:45 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61014' 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 61014 00:07:33.908 [2024-12-06 18:04:45.954351] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.908 [2024-12-06 18:04:45.954462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.908 18:04:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 61014 00:07:33.908 [2024-12-06 18:04:45.955039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.908 [2024-12-06 18:04:45.955082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:33.908 [2024-12-06 18:04:45.976025] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.288 18:04:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:35.288 00:07:35.288 real 0m2.574s 00:07:35.288 user 0m2.808s 00:07:35.288 sys 0m0.335s 00:07:35.288 18:04:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.288 18:04:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.288 ************************************ 00:07:35.288 END TEST raid1_resize_test 00:07:35.288 ************************************ 00:07:35.288 18:04:47 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:35.288 18:04:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:35.288 18:04:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:35.288 18:04:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:35.288 18:04:47 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.288 18:04:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.288 ************************************ 00:07:35.288 START TEST raid_state_function_test 00:07:35.288 ************************************ 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.288 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61077 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.289 Process raid pid: 61077 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61077' 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61077 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61077 ']' 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:35.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.289 18:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.548 [2024-12-06 18:04:47.485773] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:07:35.548 [2024-12-06 18:04:47.485909] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.548 [2024-12-06 18:04:47.651433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.807 [2024-12-06 18:04:47.793171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.066 [2024-12-06 18:04:48.043552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.066 [2024-12-06 18:04:48.043619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.326 [2024-12-06 18:04:48.435015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.326 [2024-12-06 18:04:48.435114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:07:36.326 [2024-12-06 18:04:48.435128] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.326 [2024-12-06 18:04:48.435140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.326 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.326 "name": "Existed_Raid", 00:07:36.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.327 "strip_size_kb": 64, 00:07:36.327 "state": "configuring", 00:07:36.327 "raid_level": "raid0", 00:07:36.327 "superblock": false, 00:07:36.327 "num_base_bdevs": 2, 00:07:36.327 "num_base_bdevs_discovered": 0, 00:07:36.327 "num_base_bdevs_operational": 2, 00:07:36.327 "base_bdevs_list": [ 00:07:36.327 { 00:07:36.327 "name": "BaseBdev1", 00:07:36.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.327 "is_configured": false, 00:07:36.327 "data_offset": 0, 00:07:36.327 "data_size": 0 00:07:36.327 }, 00:07:36.327 { 00:07:36.327 "name": "BaseBdev2", 00:07:36.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.327 "is_configured": false, 00:07:36.327 "data_offset": 0, 00:07:36.327 "data_size": 0 00:07:36.327 } 00:07:36.327 ] 00:07:36.327 }' 00:07:36.327 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.327 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.894 [2024-12-06 18:04:48.914155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.894 [2024-12-06 18:04:48.914274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.894 [2024-12-06 18:04:48.926169] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.894 [2024-12-06 18:04:48.926305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.894 [2024-12-06 18:04:48.926344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.894 [2024-12-06 18:04:48.926390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.894 [2024-12-06 18:04:48.981189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.894 BaseBdev1 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.894 18:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.894 [ 00:07:36.894 { 00:07:36.894 "name": "BaseBdev1", 00:07:36.894 "aliases": [ 00:07:36.894 "5a4f77db-49a6-4dac-964a-3a4a3f9d24b6" 00:07:36.894 ], 00:07:36.894 "product_name": "Malloc disk", 00:07:36.894 "block_size": 512, 00:07:36.894 "num_blocks": 65536, 00:07:36.894 "uuid": "5a4f77db-49a6-4dac-964a-3a4a3f9d24b6", 00:07:36.894 "assigned_rate_limits": { 00:07:36.894 "rw_ios_per_sec": 0, 00:07:36.894 "rw_mbytes_per_sec": 0, 00:07:36.894 "r_mbytes_per_sec": 0, 00:07:36.894 "w_mbytes_per_sec": 0 00:07:36.894 }, 00:07:36.894 "claimed": true, 00:07:36.894 "claim_type": "exclusive_write", 00:07:36.894 "zoned": false, 00:07:36.894 "supported_io_types": { 00:07:36.894 "read": true, 00:07:36.894 "write": true, 00:07:36.894 "unmap": true, 00:07:36.894 "flush": true, 00:07:36.894 "reset": true, 00:07:36.894 "nvme_admin": false, 00:07:36.894 "nvme_io": 
false, 00:07:36.894 "nvme_io_md": false, 00:07:36.894 "write_zeroes": true, 00:07:36.894 "zcopy": true, 00:07:36.894 "get_zone_info": false, 00:07:36.894 "zone_management": false, 00:07:36.894 "zone_append": false, 00:07:36.894 "compare": false, 00:07:36.894 "compare_and_write": false, 00:07:36.894 "abort": true, 00:07:36.894 "seek_hole": false, 00:07:36.894 "seek_data": false, 00:07:36.894 "copy": true, 00:07:36.894 "nvme_iov_md": false 00:07:36.894 }, 00:07:36.894 "memory_domains": [ 00:07:36.894 { 00:07:36.894 "dma_device_id": "system", 00:07:36.894 "dma_device_type": 1 00:07:36.894 }, 00:07:36.894 { 00:07:36.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.894 "dma_device_type": 2 00:07:36.894 } 00:07:36.894 ], 00:07:36.894 "driver_specific": {} 00:07:36.894 } 00:07:36.894 ] 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.894 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.895 18:04:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.895 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.895 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.895 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.895 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.895 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.895 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.153 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.153 "name": "Existed_Raid", 00:07:37.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.153 "strip_size_kb": 64, 00:07:37.153 "state": "configuring", 00:07:37.153 "raid_level": "raid0", 00:07:37.153 "superblock": false, 00:07:37.153 "num_base_bdevs": 2, 00:07:37.153 "num_base_bdevs_discovered": 1, 00:07:37.153 "num_base_bdevs_operational": 2, 00:07:37.153 "base_bdevs_list": [ 00:07:37.153 { 00:07:37.153 "name": "BaseBdev1", 00:07:37.153 "uuid": "5a4f77db-49a6-4dac-964a-3a4a3f9d24b6", 00:07:37.153 "is_configured": true, 00:07:37.153 "data_offset": 0, 00:07:37.153 "data_size": 65536 00:07:37.153 }, 00:07:37.153 { 00:07:37.153 "name": "BaseBdev2", 00:07:37.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.153 "is_configured": false, 00:07:37.153 "data_offset": 0, 00:07:37.153 "data_size": 0 00:07:37.153 } 00:07:37.153 ] 00:07:37.153 }' 00:07:37.153 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.153 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.410 18:04:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.410 [2024-12-06 18:04:49.468445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.410 [2024-12-06 18:04:49.468585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.410 [2024-12-06 18:04:49.480511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.410 [2024-12-06 18:04:49.482770] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.410 [2024-12-06 18:04:49.482888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.410 "name": "Existed_Raid", 00:07:37.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.410 "strip_size_kb": 64, 00:07:37.410 "state": "configuring", 00:07:37.410 "raid_level": "raid0", 00:07:37.410 "superblock": false, 00:07:37.410 "num_base_bdevs": 2, 00:07:37.410 "num_base_bdevs_discovered": 1, 00:07:37.410 "num_base_bdevs_operational": 2, 
00:07:37.410 "base_bdevs_list": [ 00:07:37.410 { 00:07:37.410 "name": "BaseBdev1", 00:07:37.410 "uuid": "5a4f77db-49a6-4dac-964a-3a4a3f9d24b6", 00:07:37.410 "is_configured": true, 00:07:37.410 "data_offset": 0, 00:07:37.410 "data_size": 65536 00:07:37.410 }, 00:07:37.410 { 00:07:37.410 "name": "BaseBdev2", 00:07:37.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.410 "is_configured": false, 00:07:37.410 "data_offset": 0, 00:07:37.410 "data_size": 0 00:07:37.410 } 00:07:37.410 ] 00:07:37.410 }' 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.410 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.978 [2024-12-06 18:04:49.966969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.978 [2024-12-06 18:04:49.967132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.978 [2024-12-06 18:04:49.967167] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:37.978 [2024-12-06 18:04:49.967522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.978 [2024-12-06 18:04:49.967802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.978 [2024-12-06 18:04:49.967861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:37.978 [2024-12-06 18:04:49.968258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.978 BaseBdev2 00:07:37.978 
18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.978 [ 00:07:37.978 { 00:07:37.978 "name": "BaseBdev2", 00:07:37.978 "aliases": [ 00:07:37.978 "24339092-ba1f-4365-b763-e4bea4cf8d67" 00:07:37.978 ], 00:07:37.978 "product_name": "Malloc disk", 00:07:37.978 "block_size": 512, 00:07:37.978 "num_blocks": 65536, 00:07:37.978 "uuid": "24339092-ba1f-4365-b763-e4bea4cf8d67", 00:07:37.978 "assigned_rate_limits": { 00:07:37.978 "rw_ios_per_sec": 0, 00:07:37.978 "rw_mbytes_per_sec": 0, 
00:07:37.978 "r_mbytes_per_sec": 0, 00:07:37.978 "w_mbytes_per_sec": 0 00:07:37.978 }, 00:07:37.978 "claimed": true, 00:07:37.978 "claim_type": "exclusive_write", 00:07:37.978 "zoned": false, 00:07:37.978 "supported_io_types": { 00:07:37.978 "read": true, 00:07:37.978 "write": true, 00:07:37.978 "unmap": true, 00:07:37.978 "flush": true, 00:07:37.978 "reset": true, 00:07:37.978 "nvme_admin": false, 00:07:37.978 "nvme_io": false, 00:07:37.978 "nvme_io_md": false, 00:07:37.978 "write_zeroes": true, 00:07:37.978 "zcopy": true, 00:07:37.978 "get_zone_info": false, 00:07:37.978 "zone_management": false, 00:07:37.978 "zone_append": false, 00:07:37.978 "compare": false, 00:07:37.978 "compare_and_write": false, 00:07:37.978 "abort": true, 00:07:37.978 "seek_hole": false, 00:07:37.978 "seek_data": false, 00:07:37.978 "copy": true, 00:07:37.978 "nvme_iov_md": false 00:07:37.978 }, 00:07:37.978 "memory_domains": [ 00:07:37.978 { 00:07:37.978 "dma_device_id": "system", 00:07:37.978 "dma_device_type": 1 00:07:37.978 }, 00:07:37.978 { 00:07:37.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.978 "dma_device_type": 2 00:07:37.978 } 00:07:37.978 ], 00:07:37.978 "driver_specific": {} 00:07:37.978 } 00:07:37.978 ] 00:07:37.978 18:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.978 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.979 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.979 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.979 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.979 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.979 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.979 "name": "Existed_Raid", 00:07:37.979 "uuid": "48439af2-7ddb-47be-8781-3bea1c3c1d3f", 00:07:37.979 "strip_size_kb": 64, 00:07:37.979 "state": "online", 00:07:37.979 "raid_level": "raid0", 00:07:37.979 "superblock": false, 00:07:37.979 "num_base_bdevs": 2, 00:07:37.979 "num_base_bdevs_discovered": 2, 00:07:37.979 "num_base_bdevs_operational": 2, 00:07:37.979 "base_bdevs_list": [ 00:07:37.979 { 00:07:37.979 "name": "BaseBdev1", 00:07:37.979 "uuid": "5a4f77db-49a6-4dac-964a-3a4a3f9d24b6", 00:07:37.979 
"is_configured": true, 00:07:37.979 "data_offset": 0, 00:07:37.979 "data_size": 65536 00:07:37.979 }, 00:07:37.979 { 00:07:37.979 "name": "BaseBdev2", 00:07:37.979 "uuid": "24339092-ba1f-4365-b763-e4bea4cf8d67", 00:07:37.979 "is_configured": true, 00:07:37.979 "data_offset": 0, 00:07:37.979 "data_size": 65536 00:07:37.979 } 00:07:37.979 ] 00:07:37.979 }' 00:07:37.979 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.979 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.547 [2024-12-06 18:04:50.414625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:38.547 "name": "Existed_Raid", 00:07:38.547 "aliases": [ 00:07:38.547 "48439af2-7ddb-47be-8781-3bea1c3c1d3f" 00:07:38.547 ], 00:07:38.547 "product_name": "Raid Volume", 00:07:38.547 "block_size": 512, 00:07:38.547 "num_blocks": 131072, 00:07:38.547 "uuid": "48439af2-7ddb-47be-8781-3bea1c3c1d3f", 00:07:38.547 "assigned_rate_limits": { 00:07:38.547 "rw_ios_per_sec": 0, 00:07:38.547 "rw_mbytes_per_sec": 0, 00:07:38.547 "r_mbytes_per_sec": 0, 00:07:38.547 "w_mbytes_per_sec": 0 00:07:38.547 }, 00:07:38.547 "claimed": false, 00:07:38.547 "zoned": false, 00:07:38.547 "supported_io_types": { 00:07:38.547 "read": true, 00:07:38.547 "write": true, 00:07:38.547 "unmap": true, 00:07:38.547 "flush": true, 00:07:38.547 "reset": true, 00:07:38.547 "nvme_admin": false, 00:07:38.547 "nvme_io": false, 00:07:38.547 "nvme_io_md": false, 00:07:38.547 "write_zeroes": true, 00:07:38.547 "zcopy": false, 00:07:38.547 "get_zone_info": false, 00:07:38.547 "zone_management": false, 00:07:38.547 "zone_append": false, 00:07:38.547 "compare": false, 00:07:38.547 "compare_and_write": false, 00:07:38.547 "abort": false, 00:07:38.547 "seek_hole": false, 00:07:38.547 "seek_data": false, 00:07:38.547 "copy": false, 00:07:38.547 "nvme_iov_md": false 00:07:38.547 }, 00:07:38.547 "memory_domains": [ 00:07:38.547 { 00:07:38.547 "dma_device_id": "system", 00:07:38.547 "dma_device_type": 1 00:07:38.547 }, 00:07:38.547 { 00:07:38.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.547 "dma_device_type": 2 00:07:38.547 }, 00:07:38.547 { 00:07:38.547 "dma_device_id": "system", 00:07:38.547 "dma_device_type": 1 00:07:38.547 }, 00:07:38.547 { 00:07:38.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.547 "dma_device_type": 2 00:07:38.547 } 00:07:38.547 ], 00:07:38.547 "driver_specific": { 00:07:38.547 "raid": { 00:07:38.547 "uuid": "48439af2-7ddb-47be-8781-3bea1c3c1d3f", 00:07:38.547 "strip_size_kb": 64, 00:07:38.547 "state": "online", 00:07:38.547 "raid_level": "raid0", 
00:07:38.547 "superblock": false, 00:07:38.547 "num_base_bdevs": 2, 00:07:38.547 "num_base_bdevs_discovered": 2, 00:07:38.547 "num_base_bdevs_operational": 2, 00:07:38.547 "base_bdevs_list": [ 00:07:38.547 { 00:07:38.547 "name": "BaseBdev1", 00:07:38.547 "uuid": "5a4f77db-49a6-4dac-964a-3a4a3f9d24b6", 00:07:38.547 "is_configured": true, 00:07:38.547 "data_offset": 0, 00:07:38.547 "data_size": 65536 00:07:38.547 }, 00:07:38.547 { 00:07:38.547 "name": "BaseBdev2", 00:07:38.547 "uuid": "24339092-ba1f-4365-b763-e4bea4cf8d67", 00:07:38.547 "is_configured": true, 00:07:38.547 "data_offset": 0, 00:07:38.547 "data_size": 65536 00:07:38.547 } 00:07:38.547 ] 00:07:38.547 } 00:07:38.547 } 00:07:38.547 }' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.547 BaseBdev2' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.547 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.547 [2024-12-06 18:04:50.673948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.547 [2024-12-06 18:04:50.674097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.547 [2024-12-06 18:04:50.674173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.806 18:04:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.806 "name": "Existed_Raid", 00:07:38.806 "uuid": "48439af2-7ddb-47be-8781-3bea1c3c1d3f", 00:07:38.806 "strip_size_kb": 64, 00:07:38.806 "state": "offline", 00:07:38.806 "raid_level": "raid0", 00:07:38.806 "superblock": false, 00:07:38.806 "num_base_bdevs": 2, 00:07:38.806 "num_base_bdevs_discovered": 1, 00:07:38.806 "num_base_bdevs_operational": 1, 00:07:38.806 "base_bdevs_list": [ 00:07:38.806 { 00:07:38.806 "name": null, 00:07:38.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.806 "is_configured": false, 00:07:38.806 "data_offset": 0, 00:07:38.806 "data_size": 65536 00:07:38.806 }, 00:07:38.806 { 00:07:38.806 "name": "BaseBdev2", 00:07:38.806 "uuid": "24339092-ba1f-4365-b763-e4bea4cf8d67", 00:07:38.806 "is_configured": true, 00:07:38.806 "data_offset": 0, 00:07:38.806 "data_size": 65536 00:07:38.806 } 00:07:38.806 ] 00:07:38.806 }' 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.806 18:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.369 18:04:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 [2024-12-06 18:04:51.283730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.369 [2024-12-06 18:04:51.283810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.369 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61077 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61077 ']' 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61077 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61077 00:07:39.370 killing process with pid 61077 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61077' 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61077 00:07:39.370 18:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61077 00:07:39.370 [2024-12-06 18:04:51.498808] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.370 [2024-12-06 18:04:51.518827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.743 18:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:40.743 00:07:40.743 real 0m5.474s 00:07:40.743 user 0m7.851s 00:07:40.743 sys 0m0.830s 00:07:40.743 18:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:40.743 18:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.743 ************************************ 00:07:40.743 END TEST raid_state_function_test 00:07:40.743 ************************************ 00:07:40.743 18:04:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:40.743 18:04:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:40.743 18:04:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.743 18:04:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.743 ************************************ 00:07:40.743 START TEST raid_state_function_test_sb 00:07:40.743 ************************************ 00:07:40.743 18:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:40.743 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:40.743 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61330 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61330' 00:07:41.002 Process raid pid: 61330 00:07:41.002 18:04:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61330 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61330 ']' 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.002 18:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.002 [2024-12-06 18:04:53.013300] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:07:41.002 [2024-12-06 18:04:53.013939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.261 [2024-12-06 18:04:53.180060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.261 [2024-12-06 18:04:53.340021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.520 [2024-12-06 18:04:53.589968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.520 [2024-12-06 18:04:53.590028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.778 18:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.778 18:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:41.778 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.779 18:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.779 18:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.779 [2024-12-06 18:04:53.942802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.779 [2024-12-06 18:04:53.942875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.779 [2024-12-06 18:04:53.942888] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.779 [2024-12-06 18:04:53.942900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.037 18:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.037 
18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:42.037 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.037 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.038 "name": "Existed_Raid", 00:07:42.038 "uuid": "3439a670-402f-4bb3-97dd-94618b53de9c", 00:07:42.038 "strip_size_kb": 
64, 00:07:42.038 "state": "configuring", 00:07:42.038 "raid_level": "raid0", 00:07:42.038 "superblock": true, 00:07:42.038 "num_base_bdevs": 2, 00:07:42.038 "num_base_bdevs_discovered": 0, 00:07:42.038 "num_base_bdevs_operational": 2, 00:07:42.038 "base_bdevs_list": [ 00:07:42.038 { 00:07:42.038 "name": "BaseBdev1", 00:07:42.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.038 "is_configured": false, 00:07:42.038 "data_offset": 0, 00:07:42.038 "data_size": 0 00:07:42.038 }, 00:07:42.038 { 00:07:42.038 "name": "BaseBdev2", 00:07:42.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.038 "is_configured": false, 00:07:42.038 "data_offset": 0, 00:07:42.038 "data_size": 0 00:07:42.038 } 00:07:42.038 ] 00:07:42.038 }' 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.038 18:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 [2024-12-06 18:04:54.370282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.296 [2024-12-06 18:04:54.370339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.296 18:04:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 [2024-12-06 18:04:54.382292] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.296 [2024-12-06 18:04:54.382361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.296 [2024-12-06 18:04:54.382373] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.296 [2024-12-06 18:04:54.382386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 [2024-12-06 18:04:54.436800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.296 BaseBdev1 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.296 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:42.297 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.297 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:42.297 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:42.297 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.297 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.297 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.297 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:42.297 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.297 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.555 [ 00:07:42.555 { 00:07:42.555 "name": "BaseBdev1", 00:07:42.555 "aliases": [ 00:07:42.555 "05cfd1f1-3290-4473-b806-f3e041324275" 00:07:42.555 ], 00:07:42.555 "product_name": "Malloc disk", 00:07:42.555 "block_size": 512, 00:07:42.555 "num_blocks": 65536, 00:07:42.555 "uuid": "05cfd1f1-3290-4473-b806-f3e041324275", 00:07:42.555 "assigned_rate_limits": { 00:07:42.555 "rw_ios_per_sec": 0, 00:07:42.555 "rw_mbytes_per_sec": 0, 00:07:42.555 "r_mbytes_per_sec": 0, 00:07:42.555 "w_mbytes_per_sec": 0 00:07:42.555 }, 00:07:42.555 "claimed": true, 00:07:42.555 "claim_type": "exclusive_write", 00:07:42.555 "zoned": false, 00:07:42.555 "supported_io_types": { 00:07:42.555 "read": true, 00:07:42.555 "write": true, 00:07:42.555 "unmap": true, 00:07:42.555 "flush": true, 00:07:42.555 "reset": true, 00:07:42.555 "nvme_admin": false, 00:07:42.555 "nvme_io": false, 00:07:42.555 "nvme_io_md": false, 00:07:42.555 "write_zeroes": true, 00:07:42.555 "zcopy": true, 00:07:42.555 "get_zone_info": false, 00:07:42.555 "zone_management": false, 00:07:42.555 "zone_append": false, 00:07:42.555 "compare": false, 00:07:42.555 "compare_and_write": false, 00:07:42.555 
"abort": true, 00:07:42.555 "seek_hole": false, 00:07:42.555 "seek_data": false, 00:07:42.555 "copy": true, 00:07:42.555 "nvme_iov_md": false 00:07:42.555 }, 00:07:42.555 "memory_domains": [ 00:07:42.555 { 00:07:42.555 "dma_device_id": "system", 00:07:42.555 "dma_device_type": 1 00:07:42.555 }, 00:07:42.555 { 00:07:42.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.555 "dma_device_type": 2 00:07:42.555 } 00:07:42.555 ], 00:07:42.555 "driver_specific": {} 00:07:42.555 } 00:07:42.555 ] 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.555 "name": "Existed_Raid", 00:07:42.555 "uuid": "361abb13-4226-481b-b6e6-d2b21df5b916", 00:07:42.555 "strip_size_kb": 64, 00:07:42.555 "state": "configuring", 00:07:42.555 "raid_level": "raid0", 00:07:42.555 "superblock": true, 00:07:42.555 "num_base_bdevs": 2, 00:07:42.555 "num_base_bdevs_discovered": 1, 00:07:42.555 "num_base_bdevs_operational": 2, 00:07:42.555 "base_bdevs_list": [ 00:07:42.555 { 00:07:42.555 "name": "BaseBdev1", 00:07:42.555 "uuid": "05cfd1f1-3290-4473-b806-f3e041324275", 00:07:42.555 "is_configured": true, 00:07:42.555 "data_offset": 2048, 00:07:42.555 "data_size": 63488 00:07:42.555 }, 00:07:42.555 { 00:07:42.555 "name": "BaseBdev2", 00:07:42.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.555 "is_configured": false, 00:07:42.555 "data_offset": 0, 00:07:42.555 "data_size": 0 00:07:42.555 } 00:07:42.555 ] 00:07:42.555 }' 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.555 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.813 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.813 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.813 18:04:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.072 [2024-12-06 18:04:54.983995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.072 [2024-12-06 18:04:54.984082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:43.072 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.072 18:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.072 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.072 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.072 [2024-12-06 18:04:54.996087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.072 [2024-12-06 18:04:54.998257] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.073 [2024-12-06 18:04:54.998329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.073 18:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.073 "name": "Existed_Raid", 00:07:43.073 "uuid": "99d46415-2361-4dd5-827c-2a7f48cafa09", 00:07:43.073 "strip_size_kb": 64, 00:07:43.073 "state": "configuring", 00:07:43.073 "raid_level": "raid0", 00:07:43.073 "superblock": true, 00:07:43.073 "num_base_bdevs": 2, 00:07:43.073 "num_base_bdevs_discovered": 1, 00:07:43.073 "num_base_bdevs_operational": 2, 00:07:43.073 "base_bdevs_list": [ 00:07:43.073 { 00:07:43.073 "name": "BaseBdev1", 00:07:43.073 "uuid": "05cfd1f1-3290-4473-b806-f3e041324275", 00:07:43.073 "is_configured": true, 00:07:43.073 "data_offset": 2048, 
00:07:43.073 "data_size": 63488 00:07:43.073 }, 00:07:43.073 { 00:07:43.073 "name": "BaseBdev2", 00:07:43.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.073 "is_configured": false, 00:07:43.073 "data_offset": 0, 00:07:43.073 "data_size": 0 00:07:43.073 } 00:07:43.073 ] 00:07:43.073 }' 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.073 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.689 [2024-12-06 18:04:55.569522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.689 [2024-12-06 18:04:55.569828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:43.689 [2024-12-06 18:04:55.569850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:43.689 [2024-12-06 18:04:55.570169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:43.689 BaseBdev2 00:07:43.689 [2024-12-06 18:04:55.570369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:43.689 [2024-12-06 18:04:55.570396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:43.689 [2024-12-06 18:04:55.570568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.689 [ 00:07:43.689 { 00:07:43.689 "name": "BaseBdev2", 00:07:43.689 "aliases": [ 00:07:43.689 "e5fe0b41-1fe0-4ef9-b5ba-92461e339149" 00:07:43.689 ], 00:07:43.689 "product_name": "Malloc disk", 00:07:43.689 "block_size": 512, 00:07:43.689 "num_blocks": 65536, 00:07:43.689 "uuid": "e5fe0b41-1fe0-4ef9-b5ba-92461e339149", 00:07:43.689 "assigned_rate_limits": { 00:07:43.689 "rw_ios_per_sec": 0, 00:07:43.689 "rw_mbytes_per_sec": 0, 00:07:43.689 "r_mbytes_per_sec": 0, 00:07:43.689 "w_mbytes_per_sec": 0 00:07:43.689 }, 00:07:43.689 "claimed": true, 00:07:43.689 "claim_type": 
"exclusive_write", 00:07:43.689 "zoned": false, 00:07:43.689 "supported_io_types": { 00:07:43.689 "read": true, 00:07:43.689 "write": true, 00:07:43.689 "unmap": true, 00:07:43.689 "flush": true, 00:07:43.689 "reset": true, 00:07:43.689 "nvme_admin": false, 00:07:43.689 "nvme_io": false, 00:07:43.689 "nvme_io_md": false, 00:07:43.689 "write_zeroes": true, 00:07:43.689 "zcopy": true, 00:07:43.689 "get_zone_info": false, 00:07:43.689 "zone_management": false, 00:07:43.689 "zone_append": false, 00:07:43.689 "compare": false, 00:07:43.689 "compare_and_write": false, 00:07:43.689 "abort": true, 00:07:43.689 "seek_hole": false, 00:07:43.689 "seek_data": false, 00:07:43.689 "copy": true, 00:07:43.689 "nvme_iov_md": false 00:07:43.689 }, 00:07:43.689 "memory_domains": [ 00:07:43.689 { 00:07:43.689 "dma_device_id": "system", 00:07:43.689 "dma_device_type": 1 00:07:43.689 }, 00:07:43.689 { 00:07:43.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.689 "dma_device_type": 2 00:07:43.689 } 00:07:43.689 ], 00:07:43.689 "driver_specific": {} 00:07:43.689 } 00:07:43.689 ] 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.689 "name": "Existed_Raid", 00:07:43.689 "uuid": "99d46415-2361-4dd5-827c-2a7f48cafa09", 00:07:43.689 "strip_size_kb": 64, 00:07:43.689 "state": "online", 00:07:43.689 "raid_level": "raid0", 00:07:43.689 "superblock": true, 00:07:43.689 "num_base_bdevs": 2, 00:07:43.689 "num_base_bdevs_discovered": 2, 00:07:43.689 "num_base_bdevs_operational": 2, 00:07:43.689 "base_bdevs_list": [ 00:07:43.689 { 00:07:43.689 "name": "BaseBdev1", 00:07:43.689 "uuid": "05cfd1f1-3290-4473-b806-f3e041324275", 00:07:43.689 "is_configured": true, 00:07:43.689 "data_offset": 2048, 00:07:43.689 "data_size": 63488 
00:07:43.689 }, 00:07:43.689 { 00:07:43.689 "name": "BaseBdev2", 00:07:43.689 "uuid": "e5fe0b41-1fe0-4ef9-b5ba-92461e339149", 00:07:43.689 "is_configured": true, 00:07:43.689 "data_offset": 2048, 00:07:43.689 "data_size": 63488 00:07:43.689 } 00:07:43.689 ] 00:07:43.689 }' 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.689 18:04:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.949 [2024-12-06 18:04:56.029164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.949 "name": 
"Existed_Raid", 00:07:43.949 "aliases": [ 00:07:43.949 "99d46415-2361-4dd5-827c-2a7f48cafa09" 00:07:43.949 ], 00:07:43.949 "product_name": "Raid Volume", 00:07:43.949 "block_size": 512, 00:07:43.949 "num_blocks": 126976, 00:07:43.949 "uuid": "99d46415-2361-4dd5-827c-2a7f48cafa09", 00:07:43.949 "assigned_rate_limits": { 00:07:43.949 "rw_ios_per_sec": 0, 00:07:43.949 "rw_mbytes_per_sec": 0, 00:07:43.949 "r_mbytes_per_sec": 0, 00:07:43.949 "w_mbytes_per_sec": 0 00:07:43.949 }, 00:07:43.949 "claimed": false, 00:07:43.949 "zoned": false, 00:07:43.949 "supported_io_types": { 00:07:43.949 "read": true, 00:07:43.949 "write": true, 00:07:43.949 "unmap": true, 00:07:43.949 "flush": true, 00:07:43.949 "reset": true, 00:07:43.949 "nvme_admin": false, 00:07:43.949 "nvme_io": false, 00:07:43.949 "nvme_io_md": false, 00:07:43.949 "write_zeroes": true, 00:07:43.949 "zcopy": false, 00:07:43.949 "get_zone_info": false, 00:07:43.949 "zone_management": false, 00:07:43.949 "zone_append": false, 00:07:43.949 "compare": false, 00:07:43.949 "compare_and_write": false, 00:07:43.949 "abort": false, 00:07:43.949 "seek_hole": false, 00:07:43.949 "seek_data": false, 00:07:43.949 "copy": false, 00:07:43.949 "nvme_iov_md": false 00:07:43.949 }, 00:07:43.949 "memory_domains": [ 00:07:43.949 { 00:07:43.949 "dma_device_id": "system", 00:07:43.949 "dma_device_type": 1 00:07:43.949 }, 00:07:43.949 { 00:07:43.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.949 "dma_device_type": 2 00:07:43.949 }, 00:07:43.949 { 00:07:43.949 "dma_device_id": "system", 00:07:43.949 "dma_device_type": 1 00:07:43.949 }, 00:07:43.949 { 00:07:43.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.949 "dma_device_type": 2 00:07:43.949 } 00:07:43.949 ], 00:07:43.949 "driver_specific": { 00:07:43.949 "raid": { 00:07:43.949 "uuid": "99d46415-2361-4dd5-827c-2a7f48cafa09", 00:07:43.949 "strip_size_kb": 64, 00:07:43.949 "state": "online", 00:07:43.949 "raid_level": "raid0", 00:07:43.949 "superblock": true, 00:07:43.949 
"num_base_bdevs": 2, 00:07:43.949 "num_base_bdevs_discovered": 2, 00:07:43.949 "num_base_bdevs_operational": 2, 00:07:43.949 "base_bdevs_list": [ 00:07:43.949 { 00:07:43.949 "name": "BaseBdev1", 00:07:43.949 "uuid": "05cfd1f1-3290-4473-b806-f3e041324275", 00:07:43.949 "is_configured": true, 00:07:43.949 "data_offset": 2048, 00:07:43.949 "data_size": 63488 00:07:43.949 }, 00:07:43.949 { 00:07:43.949 "name": "BaseBdev2", 00:07:43.949 "uuid": "e5fe0b41-1fe0-4ef9-b5ba-92461e339149", 00:07:43.949 "is_configured": true, 00:07:43.949 "data_offset": 2048, 00:07:43.949 "data_size": 63488 00:07:43.949 } 00:07:43.949 ] 00:07:43.949 } 00:07:43.949 } 00:07:43.949 }' 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:43.949 BaseBdev2' 00:07:43.949 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.208 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.209 [2024-12-06 18:04:56.220569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:44.209 [2024-12-06 18:04:56.220623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.209 [2024-12-06 18:04:56.220688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.209 18:04:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.209 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.468 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.468 "name": "Existed_Raid", 00:07:44.468 "uuid": "99d46415-2361-4dd5-827c-2a7f48cafa09", 00:07:44.468 "strip_size_kb": 64, 00:07:44.468 "state": "offline", 00:07:44.468 "raid_level": "raid0", 00:07:44.468 "superblock": true, 00:07:44.468 "num_base_bdevs": 2, 00:07:44.468 "num_base_bdevs_discovered": 1, 00:07:44.468 "num_base_bdevs_operational": 1, 00:07:44.468 "base_bdevs_list": [ 00:07:44.468 { 00:07:44.468 "name": null, 00:07:44.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.468 "is_configured": false, 00:07:44.468 "data_offset": 0, 00:07:44.468 "data_size": 63488 00:07:44.468 }, 00:07:44.468 { 00:07:44.468 "name": "BaseBdev2", 00:07:44.468 "uuid": "e5fe0b41-1fe0-4ef9-b5ba-92461e339149", 00:07:44.468 "is_configured": true, 00:07:44.468 "data_offset": 2048, 00:07:44.468 "data_size": 63488 00:07:44.468 } 00:07:44.468 ] 00:07:44.468 }' 00:07:44.468 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.468 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.726 18:04:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.726 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.726 [2024-12-06 18:04:56.868711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:44.726 [2024-12-06 18:04:56.868789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:44.985 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.985 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:44.985 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.985 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:44.985 18:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.985 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.985 18:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61330 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61330 ']' 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61330 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61330 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.985 killing process with pid 61330 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61330' 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61330 00:07:44.985 18:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61330 00:07:44.985 [2024-12-06 18:04:57.082426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.985 [2024-12-06 18:04:57.102919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.364 18:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:46.364 00:07:46.364 real 0m5.524s 00:07:46.364 user 0m7.967s 00:07:46.364 sys 0m0.818s 00:07:46.364 18:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.364 18:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.364 ************************************ 00:07:46.364 END TEST raid_state_function_test_sb 00:07:46.364 ************************************ 00:07:46.364 18:04:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:46.364 18:04:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:46.364 18:04:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.364 18:04:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.364 ************************************ 00:07:46.364 START TEST raid_superblock_test 00:07:46.364 ************************************ 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61582 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61582 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61582 ']' 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.364 18:04:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.624 [2024-12-06 18:04:58.598645] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:07:46.624 [2024-12-06 18:04:58.598781] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61582 ] 00:07:46.624 [2024-12-06 18:04:58.781897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.883 [2024-12-06 18:04:58.918512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.144 [2024-12-06 18:04:59.165183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.144 [2024-12-06 18:04:59.165259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.404 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.404 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:47.404 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:47.404 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:47.404 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:47.404 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:47.404 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:47.662 18:04:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.662 malloc1 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.662 [2024-12-06 18:04:59.622800] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:47.662 [2024-12-06 18:04:59.622899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.662 [2024-12-06 18:04:59.622928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:47.662 [2024-12-06 18:04:59.622939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.662 [2024-12-06 18:04:59.625665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.662 [2024-12-06 18:04:59.625727] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:47.662 pt1 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:47.662 18:04:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:47.662 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.663 malloc2 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.663 [2024-12-06 18:04:59.685002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.663 [2024-12-06 18:04:59.685106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.663 [2024-12-06 18:04:59.685148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:47.663 
[2024-12-06 18:04:59.685159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.663 [2024-12-06 18:04:59.687847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.663 [2024-12-06 18:04:59.687903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.663 pt2 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.663 [2024-12-06 18:04:59.697120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:47.663 [2024-12-06 18:04:59.699387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.663 [2024-12-06 18:04:59.699636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:47.663 [2024-12-06 18:04:59.699662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.663 [2024-12-06 18:04:59.700022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.663 [2024-12-06 18:04:59.700242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:47.663 [2024-12-06 18:04:59.700265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:47.663 [2024-12-06 18:04:59.700492] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.663 "name": "raid_bdev1", 00:07:47.663 "uuid": 
"288afdc6-031b-45c1-880f-82fa824e29f8", 00:07:47.663 "strip_size_kb": 64, 00:07:47.663 "state": "online", 00:07:47.663 "raid_level": "raid0", 00:07:47.663 "superblock": true, 00:07:47.663 "num_base_bdevs": 2, 00:07:47.663 "num_base_bdevs_discovered": 2, 00:07:47.663 "num_base_bdevs_operational": 2, 00:07:47.663 "base_bdevs_list": [ 00:07:47.663 { 00:07:47.663 "name": "pt1", 00:07:47.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.663 "is_configured": true, 00:07:47.663 "data_offset": 2048, 00:07:47.663 "data_size": 63488 00:07:47.663 }, 00:07:47.663 { 00:07:47.663 "name": "pt2", 00:07:47.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.663 "is_configured": true, 00:07:47.663 "data_offset": 2048, 00:07:47.663 "data_size": 63488 00:07:47.663 } 00:07:47.663 ] 00:07:47.663 }' 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.663 18:04:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.230 18:05:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.230 [2024-12-06 18:05:00.136629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.230 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.230 "name": "raid_bdev1", 00:07:48.230 "aliases": [ 00:07:48.230 "288afdc6-031b-45c1-880f-82fa824e29f8" 00:07:48.230 ], 00:07:48.230 "product_name": "Raid Volume", 00:07:48.230 "block_size": 512, 00:07:48.230 "num_blocks": 126976, 00:07:48.230 "uuid": "288afdc6-031b-45c1-880f-82fa824e29f8", 00:07:48.230 "assigned_rate_limits": { 00:07:48.230 "rw_ios_per_sec": 0, 00:07:48.230 "rw_mbytes_per_sec": 0, 00:07:48.230 "r_mbytes_per_sec": 0, 00:07:48.230 "w_mbytes_per_sec": 0 00:07:48.230 }, 00:07:48.230 "claimed": false, 00:07:48.230 "zoned": false, 00:07:48.230 "supported_io_types": { 00:07:48.230 "read": true, 00:07:48.230 "write": true, 00:07:48.230 "unmap": true, 00:07:48.230 "flush": true, 00:07:48.230 "reset": true, 00:07:48.230 "nvme_admin": false, 00:07:48.230 "nvme_io": false, 00:07:48.230 "nvme_io_md": false, 00:07:48.230 "write_zeroes": true, 00:07:48.230 "zcopy": false, 00:07:48.230 "get_zone_info": false, 00:07:48.230 "zone_management": false, 00:07:48.230 "zone_append": false, 00:07:48.230 "compare": false, 00:07:48.230 "compare_and_write": false, 00:07:48.230 "abort": false, 00:07:48.230 "seek_hole": false, 00:07:48.230 "seek_data": false, 00:07:48.230 "copy": false, 00:07:48.230 "nvme_iov_md": false 00:07:48.230 }, 00:07:48.231 "memory_domains": [ 00:07:48.231 { 00:07:48.231 "dma_device_id": "system", 00:07:48.231 "dma_device_type": 1 00:07:48.231 }, 00:07:48.231 { 00:07:48.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.231 "dma_device_type": 2 00:07:48.231 }, 00:07:48.231 { 00:07:48.231 "dma_device_id": "system", 00:07:48.231 "dma_device_type": 
1 00:07:48.231 }, 00:07:48.231 { 00:07:48.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.231 "dma_device_type": 2 00:07:48.231 } 00:07:48.231 ], 00:07:48.231 "driver_specific": { 00:07:48.231 "raid": { 00:07:48.231 "uuid": "288afdc6-031b-45c1-880f-82fa824e29f8", 00:07:48.231 "strip_size_kb": 64, 00:07:48.231 "state": "online", 00:07:48.231 "raid_level": "raid0", 00:07:48.231 "superblock": true, 00:07:48.231 "num_base_bdevs": 2, 00:07:48.231 "num_base_bdevs_discovered": 2, 00:07:48.231 "num_base_bdevs_operational": 2, 00:07:48.231 "base_bdevs_list": [ 00:07:48.231 { 00:07:48.231 "name": "pt1", 00:07:48.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.231 "is_configured": true, 00:07:48.231 "data_offset": 2048, 00:07:48.231 "data_size": 63488 00:07:48.231 }, 00:07:48.231 { 00:07:48.231 "name": "pt2", 00:07:48.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.231 "is_configured": true, 00:07:48.231 "data_offset": 2048, 00:07:48.231 "data_size": 63488 00:07:48.231 } 00:07:48.231 ] 00:07:48.231 } 00:07:48.231 } 00:07:48.231 }' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:48.231 pt2' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.231 18:05:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.231 [2024-12-06 18:05:00.376466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:48.231 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=288afdc6-031b-45c1-880f-82fa824e29f8 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 288afdc6-031b-45c1-880f-82fa824e29f8 ']' 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 [2024-12-06 18:05:00.419967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.491 [2024-12-06 18:05:00.420017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.491 [2024-12-06 18:05:00.420148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.491 [2024-12-06 18:05:00.420208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.491 [2024-12-06 18:05:00.420223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 [2024-12-06 18:05:00.547852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:48.491 [2024-12-06 18:05:00.550151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:48.491 [2024-12-06 18:05:00.550277] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:48.491 [2024-12-06 18:05:00.550395] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:48.491 [2024-12-06 18:05:00.550431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.491 [2024-12-06 18:05:00.550457] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:48.491 request: 00:07:48.491 { 00:07:48.491 "name": "raid_bdev1", 00:07:48.491 "raid_level": "raid0", 00:07:48.491 "base_bdevs": [ 00:07:48.491 "malloc1", 00:07:48.491 "malloc2" 00:07:48.491 ], 00:07:48.491 "strip_size_kb": 64, 00:07:48.491 "superblock": false, 00:07:48.491 "method": "bdev_raid_create", 00:07:48.491 "req_id": 1 00:07:48.491 } 00:07:48.491 Got JSON-RPC error response 00:07:48.491 response: 00:07:48.491 { 00:07:48.491 "code": -17, 00:07:48.491 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:48.491 } 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.491 [2024-12-06 18:05:00.599787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:48.491 [2024-12-06 18:05:00.599887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.491 [2024-12-06 18:05:00.599918] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:48.491 [2024-12-06 18:05:00.599937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.491 [2024-12-06 18:05:00.602628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.491 [2024-12-06 18:05:00.602696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:48.491 [2024-12-06 18:05:00.602852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:48.491 [2024-12-06 18:05:00.602947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:48.491 pt1 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.491 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.492 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.751 18:05:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.751 "name": "raid_bdev1", 00:07:48.751 "uuid": "288afdc6-031b-45c1-880f-82fa824e29f8", 00:07:48.751 "strip_size_kb": 64, 00:07:48.751 "state": "configuring", 00:07:48.751 "raid_level": "raid0", 00:07:48.751 "superblock": true, 00:07:48.751 "num_base_bdevs": 2, 00:07:48.751 "num_base_bdevs_discovered": 1, 00:07:48.751 "num_base_bdevs_operational": 2, 00:07:48.751 "base_bdevs_list": [ 00:07:48.751 { 00:07:48.751 "name": "pt1", 00:07:48.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.751 "is_configured": true, 00:07:48.751 "data_offset": 2048, 00:07:48.751 "data_size": 63488 00:07:48.751 }, 00:07:48.751 { 00:07:48.751 "name": null, 00:07:48.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.751 "is_configured": false, 00:07:48.751 "data_offset": 2048, 00:07:48.751 "data_size": 63488 00:07:48.751 } 00:07:48.751 ] 00:07:48.751 }' 00:07:48.751 18:05:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.751 18:05:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 [2024-12-06 18:05:01.035811] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:49.009 [2024-12-06 18:05:01.035934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.009 [2024-12-06 18:05:01.035972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:49.009 [2024-12-06 18:05:01.035997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.009 [2024-12-06 18:05:01.036610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.009 [2024-12-06 18:05:01.036654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:49.009 [2024-12-06 18:05:01.036789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:49.009 [2024-12-06 18:05:01.036844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:49.009 [2024-12-06 18:05:01.037025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:49.009 [2024-12-06 18:05:01.037053] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:49.009 [2024-12-06 18:05:01.037393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:49.009 [2024-12-06 18:05:01.037601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:49.009 [2024-12-06 18:05:01.037623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:49.009 [2024-12-06 18:05:01.037833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.009 pt2 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.009 "name": "raid_bdev1", 00:07:49.009 "uuid": "288afdc6-031b-45c1-880f-82fa824e29f8", 00:07:49.009 "strip_size_kb": 64, 00:07:49.009 "state": "online", 00:07:49.009 "raid_level": "raid0", 00:07:49.009 "superblock": true, 00:07:49.009 "num_base_bdevs": 2, 00:07:49.009 "num_base_bdevs_discovered": 2, 00:07:49.009 "num_base_bdevs_operational": 2, 00:07:49.009 "base_bdevs_list": [ 00:07:49.009 { 00:07:49.009 "name": "pt1", 00:07:49.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.009 "is_configured": true, 00:07:49.009 "data_offset": 2048, 00:07:49.009 "data_size": 63488 00:07:49.009 }, 00:07:49.009 { 00:07:49.009 "name": "pt2", 00:07:49.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.009 "is_configured": true, 00:07:49.009 "data_offset": 2048, 00:07:49.009 "data_size": 63488 00:07:49.009 } 00:07:49.009 ] 00:07:49.009 }' 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.009 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.575 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:49.575 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:49.576 
18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.576 [2024-12-06 18:05:01.480060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:49.576 "name": "raid_bdev1", 00:07:49.576 "aliases": [ 00:07:49.576 "288afdc6-031b-45c1-880f-82fa824e29f8" 00:07:49.576 ], 00:07:49.576 "product_name": "Raid Volume", 00:07:49.576 "block_size": 512, 00:07:49.576 "num_blocks": 126976, 00:07:49.576 "uuid": "288afdc6-031b-45c1-880f-82fa824e29f8", 00:07:49.576 "assigned_rate_limits": { 00:07:49.576 "rw_ios_per_sec": 0, 00:07:49.576 "rw_mbytes_per_sec": 0, 00:07:49.576 "r_mbytes_per_sec": 0, 00:07:49.576 "w_mbytes_per_sec": 0 00:07:49.576 }, 00:07:49.576 "claimed": false, 00:07:49.576 "zoned": false, 00:07:49.576 "supported_io_types": { 00:07:49.576 "read": true, 00:07:49.576 "write": true, 00:07:49.576 "unmap": true, 00:07:49.576 "flush": true, 00:07:49.576 "reset": true, 00:07:49.576 "nvme_admin": false, 00:07:49.576 "nvme_io": false, 00:07:49.576 "nvme_io_md": false, 00:07:49.576 
"write_zeroes": true, 00:07:49.576 "zcopy": false, 00:07:49.576 "get_zone_info": false, 00:07:49.576 "zone_management": false, 00:07:49.576 "zone_append": false, 00:07:49.576 "compare": false, 00:07:49.576 "compare_and_write": false, 00:07:49.576 "abort": false, 00:07:49.576 "seek_hole": false, 00:07:49.576 "seek_data": false, 00:07:49.576 "copy": false, 00:07:49.576 "nvme_iov_md": false 00:07:49.576 }, 00:07:49.576 "memory_domains": [ 00:07:49.576 { 00:07:49.576 "dma_device_id": "system", 00:07:49.576 "dma_device_type": 1 00:07:49.576 }, 00:07:49.576 { 00:07:49.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.576 "dma_device_type": 2 00:07:49.576 }, 00:07:49.576 { 00:07:49.576 "dma_device_id": "system", 00:07:49.576 "dma_device_type": 1 00:07:49.576 }, 00:07:49.576 { 00:07:49.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.576 "dma_device_type": 2 00:07:49.576 } 00:07:49.576 ], 00:07:49.576 "driver_specific": { 00:07:49.576 "raid": { 00:07:49.576 "uuid": "288afdc6-031b-45c1-880f-82fa824e29f8", 00:07:49.576 "strip_size_kb": 64, 00:07:49.576 "state": "online", 00:07:49.576 "raid_level": "raid0", 00:07:49.576 "superblock": true, 00:07:49.576 "num_base_bdevs": 2, 00:07:49.576 "num_base_bdevs_discovered": 2, 00:07:49.576 "num_base_bdevs_operational": 2, 00:07:49.576 "base_bdevs_list": [ 00:07:49.576 { 00:07:49.576 "name": "pt1", 00:07:49.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.576 "is_configured": true, 00:07:49.576 "data_offset": 2048, 00:07:49.576 "data_size": 63488 00:07:49.576 }, 00:07:49.576 { 00:07:49.576 "name": "pt2", 00:07:49.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.576 "is_configured": true, 00:07:49.576 "data_offset": 2048, 00:07:49.576 "data_size": 63488 00:07:49.576 } 00:07:49.576 ] 00:07:49.576 } 00:07:49.576 } 00:07:49.576 }' 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:49.576 pt2' 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.576 18:05:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:49.576 [2024-12-06 18:05:01.707770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.576 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 288afdc6-031b-45c1-880f-82fa824e29f8 '!=' 288afdc6-031b-45c1-880f-82fa824e29f8 ']' 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61582 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61582 ']' 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61582 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61582 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.834 killing process with pid 61582 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61582' 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61582 00:07:49.834 [2024-12-06 18:05:01.796363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.834 18:05:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61582 00:07:49.834 [2024-12-06 18:05:01.796482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.835 [2024-12-06 18:05:01.796545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.835 [2024-12-06 18:05:01.796561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:50.093 [2024-12-06 18:05:02.021552] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.468 18:05:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:51.468 00:07:51.468 real 0m4.752s 00:07:51.468 user 0m6.675s 00:07:51.468 sys 0m0.759s 00:07:51.468 18:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.468 18:05:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.468 ************************************ 00:07:51.468 END TEST raid_superblock_test 00:07:51.468 ************************************ 00:07:51.468 18:05:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:51.468 18:05:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:51.468 18:05:03 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:51.468 18:05:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.468 ************************************ 00:07:51.468 START TEST raid_read_error_test 00:07:51.468 ************************************ 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:51.468 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rrEInbGhd7 00:07:51.469 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:51.469 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61799 00:07:51.469 18:05:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61799 00:07:51.469 18:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61799 ']' 00:07:51.469 18:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.469 18:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.469 18:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:51.469 18:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.469 18:05:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.469 [2024-12-06 18:05:03.401598] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:07:51.469 [2024-12-06 18:05:03.401729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61799 ] 00:07:51.469 [2024-12-06 18:05:03.579431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.727 [2024-12-06 18:05:03.702770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.986 [2024-12-06 18:05:03.911431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.986 [2024-12-06 18:05:03.911502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.246 BaseBdev1_malloc 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.246 true 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.246 [2024-12-06 18:05:04.339326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:52.246 [2024-12-06 18:05:04.339386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.246 [2024-12-06 18:05:04.339407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:52.246 [2024-12-06 18:05:04.339418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.246 [2024-12-06 18:05:04.341573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.246 [2024-12-06 18:05:04.341617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:52.246 BaseBdev1 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:52.246 BaseBdev2_malloc 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.246 true 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.246 [2024-12-06 18:05:04.406601] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:52.246 [2024-12-06 18:05:04.406662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.246 [2024-12-06 18:05:04.406681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:52.246 [2024-12-06 18:05:04.406692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.246 [2024-12-06 18:05:04.409004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.246 [2024-12-06 18:05:04.409050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:52.246 BaseBdev2 00:07:52.246 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.505 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:52.506 18:05:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.506 [2024-12-06 18:05:04.418645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.506 [2024-12-06 18:05:04.420611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.506 [2024-12-06 18:05:04.420862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.506 [2024-12-06 18:05:04.420890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.506 [2024-12-06 18:05:04.421171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:52.506 [2024-12-06 18:05:04.421377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.506 [2024-12-06 18:05:04.421398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:52.506 [2024-12-06 18:05:04.421591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.506 "name": "raid_bdev1", 00:07:52.506 "uuid": "c18de41a-0039-4397-9a48-ba3bb4ffbd94", 00:07:52.506 "strip_size_kb": 64, 00:07:52.506 "state": "online", 00:07:52.506 "raid_level": "raid0", 00:07:52.506 "superblock": true, 00:07:52.506 "num_base_bdevs": 2, 00:07:52.506 "num_base_bdevs_discovered": 2, 00:07:52.506 "num_base_bdevs_operational": 2, 00:07:52.506 "base_bdevs_list": [ 00:07:52.506 { 00:07:52.506 "name": "BaseBdev1", 00:07:52.506 "uuid": "3ede0e97-2c32-5608-992e-60779adf0612", 00:07:52.506 "is_configured": true, 00:07:52.506 "data_offset": 2048, 00:07:52.506 "data_size": 63488 00:07:52.506 }, 00:07:52.506 { 00:07:52.506 "name": "BaseBdev2", 00:07:52.506 "uuid": "88a0120d-c8df-5ad2-9fb2-c5213c7d3ac8", 00:07:52.506 "is_configured": true, 00:07:52.506 "data_offset": 2048, 00:07:52.506 "data_size": 63488 00:07:52.506 } 00:07:52.506 ] 00:07:52.506 }' 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.506 18:05:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.764 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:52.764 18:05:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:53.022 [2024-12-06 18:05:04.951264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.957 "name": "raid_bdev1", 00:07:53.957 "uuid": "c18de41a-0039-4397-9a48-ba3bb4ffbd94", 00:07:53.957 "strip_size_kb": 64, 00:07:53.957 "state": "online", 00:07:53.957 "raid_level": "raid0", 00:07:53.957 "superblock": true, 00:07:53.957 "num_base_bdevs": 2, 00:07:53.957 "num_base_bdevs_discovered": 2, 00:07:53.957 "num_base_bdevs_operational": 2, 00:07:53.957 "base_bdevs_list": [ 00:07:53.957 { 00:07:53.957 "name": "BaseBdev1", 00:07:53.957 "uuid": "3ede0e97-2c32-5608-992e-60779adf0612", 00:07:53.957 "is_configured": true, 00:07:53.957 "data_offset": 2048, 00:07:53.957 "data_size": 63488 00:07:53.957 }, 00:07:53.957 { 00:07:53.957 "name": "BaseBdev2", 00:07:53.957 "uuid": "88a0120d-c8df-5ad2-9fb2-c5213c7d3ac8", 00:07:53.957 "is_configured": true, 00:07:53.957 "data_offset": 2048, 00:07:53.957 "data_size": 63488 00:07:53.957 } 00:07:53.957 ] 00:07:53.957 }' 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.957 18:05:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.216 [2024-12-06 18:05:06.332465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.216 [2024-12-06 18:05:06.332520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.216 [2024-12-06 18:05:06.335883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.216 [2024-12-06 18:05:06.335953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.216 [2024-12-06 18:05:06.335991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.216 [2024-12-06 18:05:06.336005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:54.216 { 00:07:54.216 "results": [ 00:07:54.216 { 00:07:54.216 "job": "raid_bdev1", 00:07:54.216 "core_mask": "0x1", 00:07:54.216 "workload": "randrw", 00:07:54.216 "percentage": 50, 00:07:54.216 "status": "finished", 00:07:54.216 "queue_depth": 1, 00:07:54.216 "io_size": 131072, 00:07:54.216 "runtime": 1.382007, 00:07:54.216 "iops": 13283.579605602577, 00:07:54.216 "mibps": 1660.4474507003222, 00:07:54.216 "io_failed": 1, 00:07:54.216 "io_timeout": 0, 00:07:54.216 "avg_latency_us": 104.20493471902338, 00:07:54.216 "min_latency_us": 28.28296943231441, 00:07:54.216 "max_latency_us": 1717.1004366812226 00:07:54.216 } 00:07:54.216 ], 00:07:54.216 "core_count": 1 00:07:54.216 } 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61799 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61799 ']' 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61799 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61799 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61799' 00:07:54.216 killing process with pid 61799 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61799 00:07:54.216 [2024-12-06 18:05:06.379555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.216 18:05:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61799 00:07:54.475 [2024-12-06 18:05:06.541877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rrEInbGhd7 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:55.856 00:07:55.856 real 0m4.655s 00:07:55.856 user 0m5.549s 00:07:55.856 sys 0m0.525s 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.856 18:05:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.856 ************************************ 00:07:55.856 END TEST raid_read_error_test 00:07:55.856 ************************************ 00:07:55.856 18:05:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:55.856 18:05:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:55.856 18:05:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.856 18:05:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.856 ************************************ 00:07:55.856 START TEST raid_write_error_test 00:07:55.856 ************************************ 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.856 18:05:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.a6T21DwHUQ 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61945 00:07:55.856 18:05:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61945 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61945 ']' 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.856 18:05:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.115 [2024-12-06 18:05:08.129267] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:07:56.115 [2024-12-06 18:05:08.129463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61945 ] 00:07:56.375 [2024-12-06 18:05:08.314486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.375 [2024-12-06 18:05:08.450656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.633 [2024-12-06 18:05:08.687119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.633 [2024-12-06 18:05:08.687178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.893 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.893 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:56.893 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:56.893 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:56.893 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.893 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.152 BaseBdev1_malloc 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.152 true 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.152 [2024-12-06 18:05:09.119667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:57.152 [2024-12-06 18:05:09.119757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.152 [2024-12-06 18:05:09.119789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:57.152 [2024-12-06 18:05:09.119804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.152 [2024-12-06 18:05:09.122442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.152 [2024-12-06 18:05:09.122502] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:57.152 BaseBdev1 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.152 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.153 BaseBdev2_malloc 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:57.153 18:05:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.153 true 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.153 [2024-12-06 18:05:09.193642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:57.153 [2024-12-06 18:05:09.193736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.153 [2024-12-06 18:05:09.193762] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:57.153 [2024-12-06 18:05:09.193776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.153 [2024-12-06 18:05:09.196528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.153 [2024-12-06 18:05:09.196591] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:57.153 BaseBdev2 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.153 [2024-12-06 18:05:09.205730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:57.153 [2024-12-06 18:05:09.208013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.153 [2024-12-06 18:05:09.208314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:57.153 [2024-12-06 18:05:09.208345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:57.153 [2024-12-06 18:05:09.208674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:57.153 [2024-12-06 18:05:09.208894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:57.153 [2024-12-06 18:05:09.208918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:57.153 [2024-12-06 18:05:09.209160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.153 "name": "raid_bdev1", 00:07:57.153 "uuid": "997e6882-e7d0-4383-b7d3-2b3cfc601e13", 00:07:57.153 "strip_size_kb": 64, 00:07:57.153 "state": "online", 00:07:57.153 "raid_level": "raid0", 00:07:57.153 "superblock": true, 00:07:57.153 "num_base_bdevs": 2, 00:07:57.153 "num_base_bdevs_discovered": 2, 00:07:57.153 "num_base_bdevs_operational": 2, 00:07:57.153 "base_bdevs_list": [ 00:07:57.153 { 00:07:57.153 "name": "BaseBdev1", 00:07:57.153 "uuid": "3fe0b2df-e5f4-5007-a289-833eef6ccf80", 00:07:57.153 "is_configured": true, 00:07:57.153 "data_offset": 2048, 00:07:57.153 "data_size": 63488 00:07:57.153 }, 00:07:57.153 { 00:07:57.153 "name": "BaseBdev2", 00:07:57.153 "uuid": "ce854e9b-045e-5ebc-b601-573b5566dc9a", 00:07:57.153 "is_configured": true, 00:07:57.153 "data_offset": 2048, 00:07:57.153 "data_size": 63488 00:07:57.153 } 00:07:57.153 ] 00:07:57.153 }' 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.153 18:05:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.721 18:05:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:57.721 18:05:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:57.721 [2024-12-06 18:05:09.814258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.661 18:05:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.661 "name": "raid_bdev1", 00:07:58.661 "uuid": "997e6882-e7d0-4383-b7d3-2b3cfc601e13", 00:07:58.661 "strip_size_kb": 64, 00:07:58.661 "state": "online", 00:07:58.661 "raid_level": "raid0", 00:07:58.661 "superblock": true, 00:07:58.661 "num_base_bdevs": 2, 00:07:58.661 "num_base_bdevs_discovered": 2, 00:07:58.661 "num_base_bdevs_operational": 2, 00:07:58.661 "base_bdevs_list": [ 00:07:58.661 { 00:07:58.661 "name": "BaseBdev1", 00:07:58.661 "uuid": "3fe0b2df-e5f4-5007-a289-833eef6ccf80", 00:07:58.661 "is_configured": true, 00:07:58.661 "data_offset": 2048, 00:07:58.661 "data_size": 63488 00:07:58.661 }, 00:07:58.661 { 00:07:58.661 "name": "BaseBdev2", 00:07:58.661 "uuid": "ce854e9b-045e-5ebc-b601-573b5566dc9a", 00:07:58.661 "is_configured": true, 00:07:58.661 "data_offset": 2048, 00:07:58.661 "data_size": 63488 00:07:58.661 } 00:07:58.661 ] 00:07:58.661 }' 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.661 18:05:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.230 [2024-12-06 18:05:11.151262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.230 [2024-12-06 18:05:11.151316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.230 [2024-12-06 18:05:11.154626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.230 [2024-12-06 18:05:11.154687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.230 [2024-12-06 18:05:11.154727] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.230 [2024-12-06 18:05:11.154741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:59.230 { 00:07:59.230 "results": [ 00:07:59.230 { 00:07:59.230 "job": "raid_bdev1", 00:07:59.230 "core_mask": "0x1", 00:07:59.230 "workload": "randrw", 00:07:59.230 "percentage": 50, 00:07:59.230 "status": "finished", 00:07:59.230 "queue_depth": 1, 00:07:59.230 "io_size": 131072, 00:07:59.230 "runtime": 1.337495, 00:07:59.230 "iops": 12746.963540050618, 00:07:59.230 "mibps": 1593.3704425063272, 00:07:59.230 "io_failed": 1, 00:07:59.230 "io_timeout": 0, 00:07:59.230 "avg_latency_us": 108.76062421083635, 00:07:59.230 "min_latency_us": 32.64279475982533, 00:07:59.230 "max_latency_us": 1767.1825327510917 00:07:59.230 } 00:07:59.230 ], 00:07:59.230 "core_count": 1 00:07:59.230 } 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61945 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61945 ']' 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61945 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61945 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.230 killing process with pid 61945 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61945' 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61945 00:07:59.230 18:05:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61945 00:07:59.230 [2024-12-06 18:05:11.185373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.230 [2024-12-06 18:05:11.349235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.a6T21DwHUQ 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:00.608 00:08:00.608 real 0m4.762s 00:08:00.608 user 0m5.727s 00:08:00.608 sys 0m0.578s 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.608 18:05:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.608 ************************************ 00:08:00.608 END TEST raid_write_error_test 00:08:00.608 ************************************ 00:08:00.867 18:05:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:00.867 18:05:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:00.867 18:05:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.867 18:05:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.867 18:05:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.867 ************************************ 00:08:00.867 START TEST raid_state_function_test 00:08:00.867 ************************************ 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62088 00:08:00.867 Process raid pid: 62088 
00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62088' 00:08:00.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62088 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62088 ']' 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.867 18:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.868 18:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.868 18:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.868 18:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.868 [2024-12-06 18:05:12.939079] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:08:00.868 [2024-12-06 18:05:12.939409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.128 [2024-12-06 18:05:13.137779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.128 [2024-12-06 18:05:13.275736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.387 [2024-12-06 18:05:13.523933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.387 [2024-12-06 18:05:13.524114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.958 [2024-12-06 18:05:13.854946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.958 [2024-12-06 18:05:13.855023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.958 [2024-12-06 18:05:13.855039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.958 [2024-12-06 18:05:13.855052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.958 18:05:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.958 "name": "Existed_Raid", 00:08:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.958 "strip_size_kb": 64, 00:08:01.958 "state": "configuring", 00:08:01.958 
"raid_level": "concat", 00:08:01.958 "superblock": false, 00:08:01.958 "num_base_bdevs": 2, 00:08:01.958 "num_base_bdevs_discovered": 0, 00:08:01.958 "num_base_bdevs_operational": 2, 00:08:01.958 "base_bdevs_list": [ 00:08:01.958 { 00:08:01.958 "name": "BaseBdev1", 00:08:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.958 "is_configured": false, 00:08:01.958 "data_offset": 0, 00:08:01.958 "data_size": 0 00:08:01.958 }, 00:08:01.958 { 00:08:01.958 "name": "BaseBdev2", 00:08:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.958 "is_configured": false, 00:08:01.958 "data_offset": 0, 00:08:01.958 "data_size": 0 00:08:01.958 } 00:08:01.958 ] 00:08:01.958 }' 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.958 18:05:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.259 [2024-12-06 18:05:14.322131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.259 [2024-12-06 18:05:14.322185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:02.259 [2024-12-06 18:05:14.334148] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.259 [2024-12-06 18:05:14.334299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.259 [2024-12-06 18:05:14.334374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.259 [2024-12-06 18:05:14.334414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.259 [2024-12-06 18:05:14.387795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.259 BaseBdev1 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.259 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.259 [ 00:08:02.259 { 00:08:02.259 "name": "BaseBdev1", 00:08:02.259 "aliases": [ 00:08:02.259 "3b046018-1054-4e27-8ad5-f006550844d2" 00:08:02.259 ], 00:08:02.259 "product_name": "Malloc disk", 00:08:02.259 "block_size": 512, 00:08:02.259 "num_blocks": 65536, 00:08:02.259 "uuid": "3b046018-1054-4e27-8ad5-f006550844d2", 00:08:02.259 "assigned_rate_limits": { 00:08:02.259 "rw_ios_per_sec": 0, 00:08:02.259 "rw_mbytes_per_sec": 0, 00:08:02.259 "r_mbytes_per_sec": 0, 00:08:02.259 "w_mbytes_per_sec": 0 00:08:02.259 }, 00:08:02.259 "claimed": true, 00:08:02.259 "claim_type": "exclusive_write", 00:08:02.259 "zoned": false, 00:08:02.259 "supported_io_types": { 00:08:02.259 "read": true, 00:08:02.259 "write": true, 00:08:02.259 "unmap": true, 00:08:02.259 "flush": true, 00:08:02.259 "reset": true, 00:08:02.259 "nvme_admin": false, 00:08:02.259 "nvme_io": false, 00:08:02.259 "nvme_io_md": false, 00:08:02.259 "write_zeroes": true, 00:08:02.259 "zcopy": true, 00:08:02.259 "get_zone_info": false, 00:08:02.259 "zone_management": false, 00:08:02.259 "zone_append": false, 00:08:02.259 "compare": false, 00:08:02.259 "compare_and_write": false, 00:08:02.259 "abort": true, 00:08:02.259 "seek_hole": false, 00:08:02.259 "seek_data": false, 00:08:02.259 "copy": true, 00:08:02.259 "nvme_iov_md": 
false 00:08:02.259 }, 00:08:02.259 "memory_domains": [ 00:08:02.259 { 00:08:02.259 "dma_device_id": "system", 00:08:02.259 "dma_device_type": 1 00:08:02.259 }, 00:08:02.259 { 00:08:02.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.259 "dma_device_type": 2 00:08:02.259 } 00:08:02.259 ], 00:08:02.519 "driver_specific": {} 00:08:02.520 } 00:08:02.520 ] 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.520 
18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.520 "name": "Existed_Raid", 00:08:02.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.520 "strip_size_kb": 64, 00:08:02.520 "state": "configuring", 00:08:02.520 "raid_level": "concat", 00:08:02.520 "superblock": false, 00:08:02.520 "num_base_bdevs": 2, 00:08:02.520 "num_base_bdevs_discovered": 1, 00:08:02.520 "num_base_bdevs_operational": 2, 00:08:02.520 "base_bdevs_list": [ 00:08:02.520 { 00:08:02.520 "name": "BaseBdev1", 00:08:02.520 "uuid": "3b046018-1054-4e27-8ad5-f006550844d2", 00:08:02.520 "is_configured": true, 00:08:02.520 "data_offset": 0, 00:08:02.520 "data_size": 65536 00:08:02.520 }, 00:08:02.520 { 00:08:02.520 "name": "BaseBdev2", 00:08:02.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.520 "is_configured": false, 00:08:02.520 "data_offset": 0, 00:08:02.520 "data_size": 0 00:08:02.520 } 00:08:02.520 ] 00:08:02.520 }' 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.520 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.779 [2024-12-06 18:05:14.851687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.779 [2024-12-06 18:05:14.851864] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.779 [2024-12-06 18:05:14.863766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.779 [2024-12-06 18:05:14.866115] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.779 [2024-12-06 18:05:14.866256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.779 "name": "Existed_Raid", 00:08:02.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.779 "strip_size_kb": 64, 00:08:02.779 "state": "configuring", 00:08:02.779 "raid_level": "concat", 00:08:02.779 "superblock": false, 00:08:02.779 "num_base_bdevs": 2, 00:08:02.779 "num_base_bdevs_discovered": 1, 00:08:02.779 "num_base_bdevs_operational": 2, 00:08:02.779 "base_bdevs_list": [ 00:08:02.779 { 00:08:02.779 "name": "BaseBdev1", 00:08:02.779 "uuid": "3b046018-1054-4e27-8ad5-f006550844d2", 00:08:02.779 "is_configured": true, 00:08:02.779 "data_offset": 0, 00:08:02.779 "data_size": 65536 00:08:02.779 }, 00:08:02.779 { 00:08:02.779 "name": "BaseBdev2", 00:08:02.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.779 "is_configured": false, 00:08:02.779 "data_offset": 0, 00:08:02.779 "data_size": 0 00:08:02.779 } 
00:08:02.779 ] 00:08:02.779 }' 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.779 18:05:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.348 [2024-12-06 18:05:15.368094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.348 [2024-12-06 18:05:15.368260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:03.348 [2024-12-06 18:05:15.368291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:03.348 [2024-12-06 18:05:15.368654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:03.348 [2024-12-06 18:05:15.368933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:03.348 [2024-12-06 18:05:15.368996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:03.348 BaseBdev2 00:08:03.348 [2024-12-06 18:05:15.369450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.348 18:05:15
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.348 [ 00:08:03.348 { 00:08:03.348 "name": "BaseBdev2", 00:08:03.348 "aliases": [ 00:08:03.348 "54bd9bde-667f-4b8d-bb2e-79ce33cd002b" 00:08:03.348 ], 00:08:03.348 "product_name": "Malloc disk", 00:08:03.348 "block_size": 512, 00:08:03.348 "num_blocks": 65536, 00:08:03.348 "uuid": "54bd9bde-667f-4b8d-bb2e-79ce33cd002b", 00:08:03.348 "assigned_rate_limits": { 00:08:03.348 "rw_ios_per_sec": 0, 00:08:03.348 "rw_mbytes_per_sec": 0, 00:08:03.348 "r_mbytes_per_sec": 0, 00:08:03.348 "w_mbytes_per_sec": 0 00:08:03.348 }, 00:08:03.348 "claimed": true, 00:08:03.348 "claim_type": "exclusive_write", 00:08:03.348 "zoned": false, 00:08:03.348 "supported_io_types": { 00:08:03.348 "read": true, 00:08:03.348 "write": true, 00:08:03.348 "unmap": true, 00:08:03.348 "flush": true, 00:08:03.348 "reset": true, 00:08:03.348 "nvme_admin": false, 00:08:03.348 "nvme_io": false, 00:08:03.348 "nvme_io_md": 
false, 00:08:03.348 "write_zeroes": true, 00:08:03.348 "zcopy": true, 00:08:03.348 "get_zone_info": false, 00:08:03.348 "zone_management": false, 00:08:03.348 "zone_append": false, 00:08:03.348 "compare": false, 00:08:03.348 "compare_and_write": false, 00:08:03.348 "abort": true, 00:08:03.348 "seek_hole": false, 00:08:03.348 "seek_data": false, 00:08:03.348 "copy": true, 00:08:03.348 "nvme_iov_md": false 00:08:03.348 }, 00:08:03.348 "memory_domains": [ 00:08:03.348 { 00:08:03.348 "dma_device_id": "system", 00:08:03.348 "dma_device_type": 1 00:08:03.348 }, 00:08:03.348 { 00:08:03.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.348 "dma_device_type": 2 00:08:03.348 } 00:08:03.348 ], 00:08:03.348 "driver_specific": {} 00:08:03.348 } 00:08:03.348 ] 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.348 "name": "Existed_Raid", 00:08:03.348 "uuid": "07366508-782f-4f55-8f77-1f825333c5c1", 00:08:03.348 "strip_size_kb": 64, 00:08:03.348 "state": "online", 00:08:03.348 "raid_level": "concat", 00:08:03.348 "superblock": false, 00:08:03.348 "num_base_bdevs": 2, 00:08:03.348 "num_base_bdevs_discovered": 2, 00:08:03.348 "num_base_bdevs_operational": 2, 00:08:03.348 "base_bdevs_list": [ 00:08:03.348 { 00:08:03.348 "name": "BaseBdev1", 00:08:03.348 "uuid": "3b046018-1054-4e27-8ad5-f006550844d2", 00:08:03.348 "is_configured": true, 00:08:03.348 "data_offset": 0, 00:08:03.348 "data_size": 65536 00:08:03.348 }, 00:08:03.348 { 00:08:03.348 "name": "BaseBdev2", 00:08:03.348 "uuid": "54bd9bde-667f-4b8d-bb2e-79ce33cd002b", 00:08:03.348 "is_configured": true, 00:08:03.348 "data_offset": 0, 00:08:03.348 "data_size": 65536 00:08:03.348 } 00:08:03.348 ] 00:08:03.348 }' 00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:03.348 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.916 [2024-12-06 18:05:15.892012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.916 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.916 "name": "Existed_Raid", 00:08:03.916 "aliases": [ 00:08:03.916 "07366508-782f-4f55-8f77-1f825333c5c1" 00:08:03.916 ], 00:08:03.916 "product_name": "Raid Volume", 00:08:03.916 "block_size": 512, 00:08:03.916 "num_blocks": 131072, 00:08:03.916 "uuid": "07366508-782f-4f55-8f77-1f825333c5c1", 00:08:03.916 "assigned_rate_limits": { 00:08:03.916 "rw_ios_per_sec": 0, 00:08:03.916 "rw_mbytes_per_sec": 0, 00:08:03.916 "r_mbytes_per_sec": 
0, 00:08:03.916 "w_mbytes_per_sec": 0 00:08:03.916 }, 00:08:03.916 "claimed": false, 00:08:03.916 "zoned": false, 00:08:03.916 "supported_io_types": { 00:08:03.916 "read": true, 00:08:03.916 "write": true, 00:08:03.916 "unmap": true, 00:08:03.916 "flush": true, 00:08:03.916 "reset": true, 00:08:03.916 "nvme_admin": false, 00:08:03.916 "nvme_io": false, 00:08:03.916 "nvme_io_md": false, 00:08:03.916 "write_zeroes": true, 00:08:03.916 "zcopy": false, 00:08:03.916 "get_zone_info": false, 00:08:03.916 "zone_management": false, 00:08:03.916 "zone_append": false, 00:08:03.916 "compare": false, 00:08:03.916 "compare_and_write": false, 00:08:03.916 "abort": false, 00:08:03.916 "seek_hole": false, 00:08:03.916 "seek_data": false, 00:08:03.916 "copy": false, 00:08:03.916 "nvme_iov_md": false 00:08:03.916 }, 00:08:03.916 "memory_domains": [ 00:08:03.916 { 00:08:03.916 "dma_device_id": "system", 00:08:03.916 "dma_device_type": 1 00:08:03.916 }, 00:08:03.916 { 00:08:03.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.916 "dma_device_type": 2 00:08:03.916 }, 00:08:03.916 { 00:08:03.916 "dma_device_id": "system", 00:08:03.916 "dma_device_type": 1 00:08:03.916 }, 00:08:03.916 { 00:08:03.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.916 "dma_device_type": 2 00:08:03.916 } 00:08:03.916 ], 00:08:03.916 "driver_specific": { 00:08:03.916 "raid": { 00:08:03.916 "uuid": "07366508-782f-4f55-8f77-1f825333c5c1", 00:08:03.916 "strip_size_kb": 64, 00:08:03.916 "state": "online", 00:08:03.916 "raid_level": "concat", 00:08:03.916 "superblock": false, 00:08:03.916 "num_base_bdevs": 2, 00:08:03.916 "num_base_bdevs_discovered": 2, 00:08:03.916 "num_base_bdevs_operational": 2, 00:08:03.916 "base_bdevs_list": [ 00:08:03.916 { 00:08:03.916 "name": "BaseBdev1", 00:08:03.916 "uuid": "3b046018-1054-4e27-8ad5-f006550844d2", 00:08:03.916 "is_configured": true, 00:08:03.916 "data_offset": 0, 00:08:03.916 "data_size": 65536 00:08:03.916 }, 00:08:03.916 { 00:08:03.916 "name": "BaseBdev2", 
00:08:03.916 "uuid": "54bd9bde-667f-4b8d-bb2e-79ce33cd002b", 00:08:03.916 "is_configured": true, 00:08:03.916 "data_offset": 0, 00:08:03.916 "data_size": 65536 00:08:03.916 } 00:08:03.916 ] 00:08:03.916 } 00:08:03.916 } 00:08:03.916 }' 00:08:03.917 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.917 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:03.917 BaseBdev2' 00:08:03.917 18:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.917 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.917 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.917 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:03.917 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.917 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.917 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.917 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.175 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.176 [2024-12-06 18:05:16.139757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:04.176 [2024-12-06 18:05:16.139889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.176 [2024-12-06 18:05:16.139989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.176 "name": "Existed_Raid", 00:08:04.176 "uuid": "07366508-782f-4f55-8f77-1f825333c5c1", 00:08:04.176 "strip_size_kb": 64, 00:08:04.176 
"state": "offline", 00:08:04.176 "raid_level": "concat", 00:08:04.176 "superblock": false, 00:08:04.176 "num_base_bdevs": 2, 00:08:04.176 "num_base_bdevs_discovered": 1, 00:08:04.176 "num_base_bdevs_operational": 1, 00:08:04.176 "base_bdevs_list": [ 00:08:04.176 { 00:08:04.176 "name": null, 00:08:04.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.176 "is_configured": false, 00:08:04.176 "data_offset": 0, 00:08:04.176 "data_size": 65536 00:08:04.176 }, 00:08:04.176 { 00:08:04.176 "name": "BaseBdev2", 00:08:04.176 "uuid": "54bd9bde-667f-4b8d-bb2e-79ce33cd002b", 00:08:04.176 "is_configured": true, 00:08:04.176 "data_offset": 0, 00:08:04.176 "data_size": 65536 00:08:04.176 } 00:08:04.176 ] 00:08:04.176 }' 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.176 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.742 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:04.742 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.742 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.742 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.743 [2024-12-06 18:05:16.773436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.743 [2024-12-06 18:05:16.773503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:04.743 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62088 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62088 ']' 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 62088 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62088 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62088' 00:08:05.019 killing process with pid 62088 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62088 00:08:05.019 18:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62088 00:08:05.019 [2024-12-06 18:05:16.983005] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.019 [2024-12-06 18:05:17.003519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.398 ************************************ 00:08:06.398 END TEST raid_state_function_test 00:08:06.398 ************************************ 00:08:06.398 00:08:06.398 real 0m5.504s 00:08:06.398 user 0m7.884s 00:08:06.398 sys 0m0.833s 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.398 18:05:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:06.398 18:05:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:06.398 18:05:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.398 18:05:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.398 ************************************ 00:08:06.398 START TEST raid_state_function_test_sb 00:08:06.398 ************************************ 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:06.398 Process raid pid: 62347 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62347 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62347' 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62347 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62347 ']' 00:08:06.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.398 18:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.398 [2024-12-06 18:05:18.483571] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:08:06.398 [2024-12-06 18:05:18.483716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.657 [2024-12-06 18:05:18.677373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.657 [2024-12-06 18:05:18.814588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.917 [2024-12-06 18:05:19.063881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.917 [2024-12-06 18:05:19.063938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.485 [2024-12-06 18:05:19.405921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.485 [2024-12-06 18:05:19.406001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.485 [2024-12-06 18:05:19.406015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.485 [2024-12-06 18:05:19.406027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.485 
18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.485 "name": "Existed_Raid", 00:08:07.485 "uuid": "1f66ecad-b3a2-4924-8248-1c73b9e0269a", 00:08:07.485 "strip_size_kb": 64, 00:08:07.485 "state": "configuring", 00:08:07.485 "raid_level": "concat", 00:08:07.485 "superblock": true, 00:08:07.485 "num_base_bdevs": 2, 00:08:07.485 "num_base_bdevs_discovered": 0, 00:08:07.485 "num_base_bdevs_operational": 2, 00:08:07.485 "base_bdevs_list": [ 00:08:07.485 { 00:08:07.485 "name": "BaseBdev1", 00:08:07.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.485 "is_configured": false, 00:08:07.485 "data_offset": 0, 00:08:07.485 "data_size": 0 00:08:07.485 }, 00:08:07.485 { 00:08:07.485 "name": "BaseBdev2", 00:08:07.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.485 "is_configured": false, 00:08:07.485 "data_offset": 0, 00:08:07.485 "data_size": 0 00:08:07.485 } 00:08:07.485 ] 00:08:07.485 }' 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.485 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.744 [2024-12-06 18:05:19.849175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:07.744 [2024-12-06 18:05:19.849331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.744 [2024-12-06 18:05:19.861182] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.744 [2024-12-06 18:05:19.861254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.744 [2024-12-06 18:05:19.861265] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.744 [2024-12-06 18:05:19.861278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.744 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.002 [2024-12-06 18:05:19.913651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:08:08.002 BaseBdev1 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.002 [ 00:08:08.002 { 00:08:08.002 "name": "BaseBdev1", 00:08:08.002 "aliases": [ 00:08:08.002 "093fa908-afe7-4b8b-bb4d-f0b1ecff7d3b" 00:08:08.002 ], 00:08:08.002 "product_name": "Malloc disk", 00:08:08.002 "block_size": 512, 00:08:08.002 "num_blocks": 65536, 00:08:08.002 "uuid": "093fa908-afe7-4b8b-bb4d-f0b1ecff7d3b", 00:08:08.002 
"assigned_rate_limits": { 00:08:08.002 "rw_ios_per_sec": 0, 00:08:08.002 "rw_mbytes_per_sec": 0, 00:08:08.002 "r_mbytes_per_sec": 0, 00:08:08.002 "w_mbytes_per_sec": 0 00:08:08.002 }, 00:08:08.002 "claimed": true, 00:08:08.002 "claim_type": "exclusive_write", 00:08:08.002 "zoned": false, 00:08:08.002 "supported_io_types": { 00:08:08.002 "read": true, 00:08:08.002 "write": true, 00:08:08.002 "unmap": true, 00:08:08.002 "flush": true, 00:08:08.002 "reset": true, 00:08:08.002 "nvme_admin": false, 00:08:08.002 "nvme_io": false, 00:08:08.002 "nvme_io_md": false, 00:08:08.002 "write_zeroes": true, 00:08:08.002 "zcopy": true, 00:08:08.002 "get_zone_info": false, 00:08:08.002 "zone_management": false, 00:08:08.002 "zone_append": false, 00:08:08.002 "compare": false, 00:08:08.002 "compare_and_write": false, 00:08:08.002 "abort": true, 00:08:08.002 "seek_hole": false, 00:08:08.002 "seek_data": false, 00:08:08.002 "copy": true, 00:08:08.002 "nvme_iov_md": false 00:08:08.002 }, 00:08:08.002 "memory_domains": [ 00:08:08.002 { 00:08:08.002 "dma_device_id": "system", 00:08:08.002 "dma_device_type": 1 00:08:08.002 }, 00:08:08.002 { 00:08:08.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.002 "dma_device_type": 2 00:08:08.002 } 00:08:08.002 ], 00:08:08.002 "driver_specific": {} 00:08:08.002 } 00:08:08.002 ] 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:08.002 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.003 "name": "Existed_Raid", 00:08:08.003 "uuid": "64b7b25f-5791-4c8e-bb2e-a413668bf983", 00:08:08.003 "strip_size_kb": 64, 00:08:08.003 "state": "configuring", 00:08:08.003 "raid_level": "concat", 00:08:08.003 "superblock": true, 00:08:08.003 "num_base_bdevs": 2, 00:08:08.003 "num_base_bdevs_discovered": 1, 00:08:08.003 "num_base_bdevs_operational": 2, 00:08:08.003 "base_bdevs_list": [ 00:08:08.003 { 00:08:08.003 "name": "BaseBdev1", 00:08:08.003 "uuid": "093fa908-afe7-4b8b-bb4d-f0b1ecff7d3b", 00:08:08.003 "is_configured": true, 00:08:08.003 "data_offset": 
2048, 00:08:08.003 "data_size": 63488 00:08:08.003 }, 00:08:08.003 { 00:08:08.003 "name": "BaseBdev2", 00:08:08.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.003 "is_configured": false, 00:08:08.003 "data_offset": 0, 00:08:08.003 "data_size": 0 00:08:08.003 } 00:08:08.003 ] 00:08:08.003 }' 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.003 18:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.261 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.261 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.261 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.261 [2024-12-06 18:05:20.416918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.262 [2024-12-06 18:05:20.417091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:08.262 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.262 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.262 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.262 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.521 [2024-12-06 18:05:20.429010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.521 [2024-12-06 18:05:20.431305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.521 [2024-12-06 18:05:20.431414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.521 "name": "Existed_Raid", 00:08:08.521 "uuid": "e02f08b1-ecfc-4593-91c7-a85f327185a8", 00:08:08.521 "strip_size_kb": 64, 00:08:08.521 "state": "configuring", 00:08:08.521 "raid_level": "concat", 00:08:08.521 "superblock": true, 00:08:08.521 "num_base_bdevs": 2, 00:08:08.521 "num_base_bdevs_discovered": 1, 00:08:08.521 "num_base_bdevs_operational": 2, 00:08:08.521 "base_bdevs_list": [ 00:08:08.521 { 00:08:08.521 "name": "BaseBdev1", 00:08:08.521 "uuid": "093fa908-afe7-4b8b-bb4d-f0b1ecff7d3b", 00:08:08.521 "is_configured": true, 00:08:08.521 "data_offset": 2048, 00:08:08.521 "data_size": 63488 00:08:08.521 }, 00:08:08.521 { 00:08:08.521 "name": "BaseBdev2", 00:08:08.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.521 "is_configured": false, 00:08:08.521 "data_offset": 0, 00:08:08.521 "data_size": 0 00:08:08.521 } 00:08:08.521 ] 00:08:08.521 }' 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.521 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.781 [2024-12-06 18:05:20.935038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.781 [2024-12-06 18:05:20.935487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:08.781 [2024-12-06 18:05:20.935566] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:08.781 [2024-12-06 18:05:20.935903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:08.781 [2024-12-06 18:05:20.936145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:08.781 [2024-12-06 18:05:20.936203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:08.781 BaseBdev2 00:08:08.781 [2024-12-06 18:05:20.936420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.781 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.041 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.041 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.041 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.041 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.041 [ 00:08:09.041 { 00:08:09.041 "name": "BaseBdev2", 00:08:09.041 "aliases": [ 00:08:09.041 "7b458cb9-e3c7-462a-ac67-ccaf1a858d22" 00:08:09.041 ], 00:08:09.041 "product_name": "Malloc disk", 00:08:09.041 "block_size": 512, 00:08:09.041 "num_blocks": 65536, 00:08:09.041 "uuid": "7b458cb9-e3c7-462a-ac67-ccaf1a858d22", 00:08:09.041 "assigned_rate_limits": { 00:08:09.041 "rw_ios_per_sec": 0, 00:08:09.041 "rw_mbytes_per_sec": 0, 00:08:09.041 "r_mbytes_per_sec": 0, 00:08:09.041 "w_mbytes_per_sec": 0 00:08:09.041 }, 00:08:09.041 "claimed": true, 00:08:09.041 "claim_type": "exclusive_write", 00:08:09.041 "zoned": false, 00:08:09.041 "supported_io_types": { 00:08:09.042 "read": true, 00:08:09.042 "write": true, 00:08:09.042 "unmap": true, 00:08:09.042 "flush": true, 00:08:09.042 "reset": true, 00:08:09.042 "nvme_admin": false, 00:08:09.042 "nvme_io": false, 00:08:09.042 "nvme_io_md": false, 00:08:09.042 "write_zeroes": true, 00:08:09.042 "zcopy": true, 00:08:09.042 "get_zone_info": false, 00:08:09.042 "zone_management": false, 00:08:09.042 "zone_append": false, 00:08:09.042 "compare": false, 00:08:09.042 "compare_and_write": false, 00:08:09.042 "abort": true, 00:08:09.042 "seek_hole": false, 00:08:09.042 "seek_data": false, 00:08:09.042 "copy": true, 00:08:09.042 "nvme_iov_md": false 00:08:09.042 }, 00:08:09.042 "memory_domains": [ 00:08:09.042 { 00:08:09.042 "dma_device_id": "system", 00:08:09.042 "dma_device_type": 1 00:08:09.042 }, 00:08:09.042 { 00:08:09.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.042 "dma_device_type": 2 00:08:09.042 } 00:08:09.042 ], 00:08:09.042 "driver_specific": {} 00:08:09.042 } 00:08:09.042 ] 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:09.042 18:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.042 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.042 "name": "Existed_Raid", 00:08:09.042 "uuid": "e02f08b1-ecfc-4593-91c7-a85f327185a8", 00:08:09.042 "strip_size_kb": 64, 00:08:09.042 "state": "online", 00:08:09.042 "raid_level": "concat", 00:08:09.042 "superblock": true, 00:08:09.042 "num_base_bdevs": 2, 00:08:09.042 "num_base_bdevs_discovered": 2, 00:08:09.042 "num_base_bdevs_operational": 2, 00:08:09.042 "base_bdevs_list": [ 00:08:09.042 { 00:08:09.042 "name": "BaseBdev1", 00:08:09.042 "uuid": "093fa908-afe7-4b8b-bb4d-f0b1ecff7d3b", 00:08:09.042 "is_configured": true, 00:08:09.042 "data_offset": 2048, 00:08:09.042 "data_size": 63488 00:08:09.042 }, 00:08:09.042 { 00:08:09.042 "name": "BaseBdev2", 00:08:09.042 "uuid": "7b458cb9-e3c7-462a-ac67-ccaf1a858d22", 00:08:09.042 "is_configured": true, 00:08:09.042 "data_offset": 2048, 00:08:09.042 "data_size": 63488 00:08:09.042 } 00:08:09.042 ] 00:08:09.042 }' 00:08:09.042 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.042 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.302 [2024-12-06 18:05:21.426644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.302 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.302 "name": "Existed_Raid", 00:08:09.302 "aliases": [ 00:08:09.302 "e02f08b1-ecfc-4593-91c7-a85f327185a8" 00:08:09.302 ], 00:08:09.302 "product_name": "Raid Volume", 00:08:09.302 "block_size": 512, 00:08:09.302 "num_blocks": 126976, 00:08:09.302 "uuid": "e02f08b1-ecfc-4593-91c7-a85f327185a8", 00:08:09.302 "assigned_rate_limits": { 00:08:09.302 "rw_ios_per_sec": 0, 00:08:09.302 "rw_mbytes_per_sec": 0, 00:08:09.302 "r_mbytes_per_sec": 0, 00:08:09.302 "w_mbytes_per_sec": 0 00:08:09.302 }, 00:08:09.302 "claimed": false, 00:08:09.302 "zoned": false, 00:08:09.302 "supported_io_types": { 00:08:09.302 "read": true, 00:08:09.302 "write": true, 00:08:09.302 "unmap": true, 00:08:09.302 "flush": true, 00:08:09.302 "reset": true, 00:08:09.302 "nvme_admin": false, 00:08:09.302 "nvme_io": false, 00:08:09.302 "nvme_io_md": false, 00:08:09.302 "write_zeroes": true, 00:08:09.302 "zcopy": false, 00:08:09.302 "get_zone_info": false, 00:08:09.302 "zone_management": false, 00:08:09.302 "zone_append": false, 00:08:09.302 "compare": false, 00:08:09.302 "compare_and_write": false, 00:08:09.302 "abort": false, 00:08:09.302 "seek_hole": false, 
00:08:09.302 "seek_data": false, 00:08:09.302 "copy": false, 00:08:09.302 "nvme_iov_md": false 00:08:09.302 }, 00:08:09.302 "memory_domains": [ 00:08:09.302 { 00:08:09.302 "dma_device_id": "system", 00:08:09.302 "dma_device_type": 1 00:08:09.302 }, 00:08:09.302 { 00:08:09.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.303 "dma_device_type": 2 00:08:09.303 }, 00:08:09.303 { 00:08:09.303 "dma_device_id": "system", 00:08:09.303 "dma_device_type": 1 00:08:09.303 }, 00:08:09.303 { 00:08:09.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.303 "dma_device_type": 2 00:08:09.303 } 00:08:09.303 ], 00:08:09.303 "driver_specific": { 00:08:09.303 "raid": { 00:08:09.303 "uuid": "e02f08b1-ecfc-4593-91c7-a85f327185a8", 00:08:09.303 "strip_size_kb": 64, 00:08:09.303 "state": "online", 00:08:09.303 "raid_level": "concat", 00:08:09.303 "superblock": true, 00:08:09.303 "num_base_bdevs": 2, 00:08:09.303 "num_base_bdevs_discovered": 2, 00:08:09.303 "num_base_bdevs_operational": 2, 00:08:09.303 "base_bdevs_list": [ 00:08:09.303 { 00:08:09.303 "name": "BaseBdev1", 00:08:09.303 "uuid": "093fa908-afe7-4b8b-bb4d-f0b1ecff7d3b", 00:08:09.303 "is_configured": true, 00:08:09.303 "data_offset": 2048, 00:08:09.303 "data_size": 63488 00:08:09.303 }, 00:08:09.303 { 00:08:09.303 "name": "BaseBdev2", 00:08:09.303 "uuid": "7b458cb9-e3c7-462a-ac67-ccaf1a858d22", 00:08:09.303 "is_configured": true, 00:08:09.303 "data_offset": 2048, 00:08:09.303 "data_size": 63488 00:08:09.303 } 00:08:09.303 ] 00:08:09.303 } 00:08:09.303 } 00:08:09.303 }' 00:08:09.303 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:09.563 BaseBdev2' 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.563 18:05:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.563 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.563 [2024-12-06 18:05:21.658019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.563 [2024-12-06 18:05:21.658140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.563 [2024-12-06 18:05:21.658233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:09.825 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.826 18:05:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.826 "name": "Existed_Raid", 00:08:09.826 "uuid": "e02f08b1-ecfc-4593-91c7-a85f327185a8", 00:08:09.826 "strip_size_kb": 64, 00:08:09.826 "state": "offline", 00:08:09.826 "raid_level": "concat", 00:08:09.826 "superblock": true, 00:08:09.826 "num_base_bdevs": 2, 00:08:09.826 "num_base_bdevs_discovered": 1, 00:08:09.826 "num_base_bdevs_operational": 1, 00:08:09.826 "base_bdevs_list": [ 00:08:09.826 { 00:08:09.826 "name": null, 00:08:09.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.826 "is_configured": false, 00:08:09.826 "data_offset": 0, 00:08:09.826 "data_size": 63488 00:08:09.826 }, 00:08:09.826 { 00:08:09.826 "name": 
"BaseBdev2", 00:08:09.826 "uuid": "7b458cb9-e3c7-462a-ac67-ccaf1a858d22", 00:08:09.826 "is_configured": true, 00:08:09.826 "data_offset": 2048, 00:08:09.826 "data_size": 63488 00:08:09.826 } 00:08:09.826 ] 00:08:09.826 }' 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.826 18:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.400 [2024-12-06 18:05:22.316327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.400 [2024-12-06 18:05:22.316421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62347 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62347 ']' 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62347 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62347 00:08:10.400 killing process with 
pid 62347 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62347' 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62347 00:08:10.400 [2024-12-06 18:05:22.513116] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.400 18:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62347 00:08:10.400 [2024-12-06 18:05:22.533653] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.791 18:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:11.791 00:08:11.791 real 0m5.434s 00:08:11.791 user 0m7.830s 00:08:11.791 sys 0m0.769s 00:08:11.791 18:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.791 18:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.791 ************************************ 00:08:11.791 END TEST raid_state_function_test_sb 00:08:11.791 ************************************ 00:08:11.791 18:05:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:11.791 18:05:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:11.791 18:05:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.791 18:05:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.791 ************************************ 00:08:11.791 START TEST raid_superblock_test 00:08:11.791 ************************************ 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # 
raid_superblock_test concat 2 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62599 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:11.791 18:05:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62599 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62599 ']' 00:08:11.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.791 18:05:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.051 [2024-12-06 18:05:23.991411] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:08:12.051 [2024-12-06 18:05:23.991685] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62599 ] 00:08:12.051 [2024-12-06 18:05:24.172162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.310 [2024-12-06 18:05:24.311856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.570 [2024-12-06 18:05:24.546827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.570 [2024-12-06 18:05:24.547006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:12.830 
18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.830 malloc1 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.830 [2024-12-06 18:05:24.966394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.830 [2024-12-06 18:05:24.966487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.830 [2024-12-06 18:05:24.966517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:12.830 [2024-12-06 18:05:24.966530] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.830 [2024-12-06 18:05:24.969197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.830 [2024-12-06 18:05:24.969340] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.830 pt1 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.830 18:05:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.088 malloc2 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.088 [2024-12-06 18:05:25.031916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.088 [2024-12-06 18:05:25.032111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.088 [2024-12-06 18:05:25.032173] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:13.088 [2024-12-06 18:05:25.032213] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.088 [2024-12-06 18:05:25.034888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.088 [2024-12-06 18:05:25.035010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.088 
pt2 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.088 [2024-12-06 18:05:25.044059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.088 [2024-12-06 18:05:25.046377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.088 [2024-12-06 18:05:25.046661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:13.088 [2024-12-06 18:05:25.046718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:13.088 [2024-12-06 18:05:25.047121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:13.088 [2024-12-06 18:05:25.047375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:13.088 [2024-12-06 18:05:25.047426] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:13.088 [2024-12-06 18:05:25.047711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.088 "name": "raid_bdev1", 00:08:13.088 "uuid": "6c1b4bb0-c76a-4210-9d55-295c2dc42cd8", 00:08:13.088 "strip_size_kb": 64, 00:08:13.088 "state": "online", 00:08:13.088 "raid_level": "concat", 00:08:13.088 "superblock": true, 00:08:13.088 "num_base_bdevs": 2, 00:08:13.088 "num_base_bdevs_discovered": 2, 00:08:13.088 "num_base_bdevs_operational": 2, 00:08:13.088 "base_bdevs_list": [ 00:08:13.088 { 00:08:13.088 "name": "pt1", 
00:08:13.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.088 "is_configured": true, 00:08:13.088 "data_offset": 2048, 00:08:13.088 "data_size": 63488 00:08:13.088 }, 00:08:13.088 { 00:08:13.088 "name": "pt2", 00:08:13.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.088 "is_configured": true, 00:08:13.088 "data_offset": 2048, 00:08:13.088 "data_size": 63488 00:08:13.088 } 00:08:13.088 ] 00:08:13.088 }' 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.088 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.346 [2024-12-06 18:05:25.500007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.346 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.634 18:05:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.634 "name": "raid_bdev1", 00:08:13.634 "aliases": [ 00:08:13.634 "6c1b4bb0-c76a-4210-9d55-295c2dc42cd8" 00:08:13.634 ], 00:08:13.634 "product_name": "Raid Volume", 00:08:13.634 "block_size": 512, 00:08:13.634 "num_blocks": 126976, 00:08:13.634 "uuid": "6c1b4bb0-c76a-4210-9d55-295c2dc42cd8", 00:08:13.634 "assigned_rate_limits": { 00:08:13.634 "rw_ios_per_sec": 0, 00:08:13.634 "rw_mbytes_per_sec": 0, 00:08:13.634 "r_mbytes_per_sec": 0, 00:08:13.634 "w_mbytes_per_sec": 0 00:08:13.634 }, 00:08:13.634 "claimed": false, 00:08:13.634 "zoned": false, 00:08:13.634 "supported_io_types": { 00:08:13.634 "read": true, 00:08:13.634 "write": true, 00:08:13.634 "unmap": true, 00:08:13.634 "flush": true, 00:08:13.634 "reset": true, 00:08:13.634 "nvme_admin": false, 00:08:13.634 "nvme_io": false, 00:08:13.634 "nvme_io_md": false, 00:08:13.634 "write_zeroes": true, 00:08:13.634 "zcopy": false, 00:08:13.634 "get_zone_info": false, 00:08:13.634 "zone_management": false, 00:08:13.634 "zone_append": false, 00:08:13.634 "compare": false, 00:08:13.634 "compare_and_write": false, 00:08:13.634 "abort": false, 00:08:13.634 "seek_hole": false, 00:08:13.634 "seek_data": false, 00:08:13.634 "copy": false, 00:08:13.634 "nvme_iov_md": false 00:08:13.634 }, 00:08:13.634 "memory_domains": [ 00:08:13.634 { 00:08:13.634 "dma_device_id": "system", 00:08:13.634 "dma_device_type": 1 00:08:13.634 }, 00:08:13.634 { 00:08:13.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.634 "dma_device_type": 2 00:08:13.634 }, 00:08:13.634 { 00:08:13.634 "dma_device_id": "system", 00:08:13.634 "dma_device_type": 1 00:08:13.634 }, 00:08:13.634 { 00:08:13.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.634 "dma_device_type": 2 00:08:13.634 } 00:08:13.634 ], 00:08:13.634 "driver_specific": { 00:08:13.634 "raid": { 00:08:13.634 "uuid": "6c1b4bb0-c76a-4210-9d55-295c2dc42cd8", 00:08:13.634 "strip_size_kb": 64, 00:08:13.634 "state": "online", 00:08:13.634 
"raid_level": "concat", 00:08:13.634 "superblock": true, 00:08:13.634 "num_base_bdevs": 2, 00:08:13.634 "num_base_bdevs_discovered": 2, 00:08:13.634 "num_base_bdevs_operational": 2, 00:08:13.634 "base_bdevs_list": [ 00:08:13.634 { 00:08:13.634 "name": "pt1", 00:08:13.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.634 "is_configured": true, 00:08:13.634 "data_offset": 2048, 00:08:13.634 "data_size": 63488 00:08:13.634 }, 00:08:13.634 { 00:08:13.634 "name": "pt2", 00:08:13.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.634 "is_configured": true, 00:08:13.634 "data_offset": 2048, 00:08:13.634 "data_size": 63488 00:08:13.634 } 00:08:13.634 ] 00:08:13.634 } 00:08:13.634 } 00:08:13.634 }' 00:08:13.634 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.634 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:13.634 pt2' 00:08:13.634 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.634 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.634 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.634 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:13.634 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.634 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.635 18:05:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:13.635 [2024-12-06 18:05:25.743993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6c1b4bb0-c76a-4210-9d55-295c2dc42cd8 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
6c1b4bb0-c76a-4210-9d55-295c2dc42cd8 ']' 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.635 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.905 [2024-12-06 18:05:25.795662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.905 [2024-12-06 18:05:25.795709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.905 [2024-12-06 18:05:25.795822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.905 [2024-12-06 18:05:25.795881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.905 [2024-12-06 18:05:25.795896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.905 18:05:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.905 [2024-12-06 18:05:25.931727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:13.905 [2024-12-06 18:05:25.934117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:13.905 [2024-12-06 18:05:25.934279] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:13.905 [2024-12-06 18:05:25.934405] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:13.905 [2024-12-06 18:05:25.934466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.905 [2024-12-06 18:05:25.934531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:13.905 request: 00:08:13.905 { 00:08:13.905 "name": "raid_bdev1", 00:08:13.905 "raid_level": "concat", 00:08:13.905 "base_bdevs": [ 00:08:13.905 "malloc1", 00:08:13.905 "malloc2" 00:08:13.905 ], 00:08:13.905 "strip_size_kb": 64, 
00:08:13.905 "superblock": false, 00:08:13.905 "method": "bdev_raid_create", 00:08:13.905 "req_id": 1 00:08:13.905 } 00:08:13.905 Got JSON-RPC error response 00:08:13.905 response: 00:08:13.905 { 00:08:13.905 "code": -17, 00:08:13.905 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:13.905 } 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:13.905 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.906 [2024-12-06 18:05:25.987706] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:08:13.906 [2024-12-06 18:05:25.987806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.906 [2024-12-06 18:05:25.987828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:13.906 [2024-12-06 18:05:25.987841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.906 [2024-12-06 18:05:25.990542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.906 [2024-12-06 18:05:25.990603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.906 [2024-12-06 18:05:25.990713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:13.906 [2024-12-06 18:05:25.990780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.906 pt1 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.906 18:05:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.906 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.906 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.906 "name": "raid_bdev1", 00:08:13.906 "uuid": "6c1b4bb0-c76a-4210-9d55-295c2dc42cd8", 00:08:13.906 "strip_size_kb": 64, 00:08:13.906 "state": "configuring", 00:08:13.906 "raid_level": "concat", 00:08:13.906 "superblock": true, 00:08:13.906 "num_base_bdevs": 2, 00:08:13.906 "num_base_bdevs_discovered": 1, 00:08:13.906 "num_base_bdevs_operational": 2, 00:08:13.906 "base_bdevs_list": [ 00:08:13.906 { 00:08:13.906 "name": "pt1", 00:08:13.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.906 "is_configured": true, 00:08:13.906 "data_offset": 2048, 00:08:13.906 "data_size": 63488 00:08:13.906 }, 00:08:13.906 { 00:08:13.906 "name": null, 00:08:13.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.906 "is_configured": false, 00:08:13.906 "data_offset": 2048, 00:08:13.906 "data_size": 63488 00:08:13.906 } 00:08:13.906 ] 00:08:13.906 }' 00:08:13.906 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.906 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.475 [2024-12-06 18:05:26.471248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.475 [2024-12-06 18:05:26.471346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.475 [2024-12-06 18:05:26.471373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:14.475 [2024-12-06 18:05:26.471387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.475 [2024-12-06 18:05:26.471943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.475 [2024-12-06 18:05:26.471969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.475 [2024-12-06 18:05:26.472093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.475 [2024-12-06 18:05:26.472129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.475 [2024-12-06 18:05:26.472282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:14.475 [2024-12-06 18:05:26.472297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:14.475 [2024-12-06 18:05:26.472586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:14.475 [2024-12-06 18:05:26.472764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:08:14.475 [2024-12-06 18:05:26.472775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:14.475 [2024-12-06 18:05:26.472953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.475 pt2 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.475 "name": "raid_bdev1", 00:08:14.475 "uuid": "6c1b4bb0-c76a-4210-9d55-295c2dc42cd8", 00:08:14.475 "strip_size_kb": 64, 00:08:14.475 "state": "online", 00:08:14.475 "raid_level": "concat", 00:08:14.475 "superblock": true, 00:08:14.475 "num_base_bdevs": 2, 00:08:14.475 "num_base_bdevs_discovered": 2, 00:08:14.475 "num_base_bdevs_operational": 2, 00:08:14.475 "base_bdevs_list": [ 00:08:14.475 { 00:08:14.475 "name": "pt1", 00:08:14.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.475 "is_configured": true, 00:08:14.475 "data_offset": 2048, 00:08:14.475 "data_size": 63488 00:08:14.475 }, 00:08:14.475 { 00:08:14.475 "name": "pt2", 00:08:14.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.475 "is_configured": true, 00:08:14.475 "data_offset": 2048, 00:08:14.475 "data_size": 63488 00:08:14.475 } 00:08:14.475 ] 00:08:14.475 }' 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.475 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.045 18:05:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.045 [2024-12-06 18:05:26.950696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.045 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.045 "name": "raid_bdev1", 00:08:15.045 "aliases": [ 00:08:15.045 "6c1b4bb0-c76a-4210-9d55-295c2dc42cd8" 00:08:15.045 ], 00:08:15.045 "product_name": "Raid Volume", 00:08:15.045 "block_size": 512, 00:08:15.045 "num_blocks": 126976, 00:08:15.045 "uuid": "6c1b4bb0-c76a-4210-9d55-295c2dc42cd8", 00:08:15.045 "assigned_rate_limits": { 00:08:15.045 "rw_ios_per_sec": 0, 00:08:15.045 "rw_mbytes_per_sec": 0, 00:08:15.045 "r_mbytes_per_sec": 0, 00:08:15.045 "w_mbytes_per_sec": 0 00:08:15.045 }, 00:08:15.045 "claimed": false, 00:08:15.045 "zoned": false, 00:08:15.045 "supported_io_types": { 00:08:15.045 "read": true, 00:08:15.045 "write": true, 00:08:15.045 "unmap": true, 00:08:15.045 "flush": true, 00:08:15.045 "reset": true, 00:08:15.045 "nvme_admin": false, 00:08:15.045 "nvme_io": false, 00:08:15.045 "nvme_io_md": false, 00:08:15.045 "write_zeroes": true, 00:08:15.045 "zcopy": false, 00:08:15.045 "get_zone_info": false, 00:08:15.045 "zone_management": false, 00:08:15.045 "zone_append": false, 00:08:15.045 "compare": false, 00:08:15.045 "compare_and_write": false, 00:08:15.045 "abort": false, 00:08:15.045 "seek_hole": false, 00:08:15.045 
"seek_data": false, 00:08:15.045 "copy": false, 00:08:15.045 "nvme_iov_md": false 00:08:15.045 }, 00:08:15.045 "memory_domains": [ 00:08:15.045 { 00:08:15.045 "dma_device_id": "system", 00:08:15.045 "dma_device_type": 1 00:08:15.046 }, 00:08:15.046 { 00:08:15.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.046 "dma_device_type": 2 00:08:15.046 }, 00:08:15.046 { 00:08:15.046 "dma_device_id": "system", 00:08:15.046 "dma_device_type": 1 00:08:15.046 }, 00:08:15.046 { 00:08:15.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.046 "dma_device_type": 2 00:08:15.046 } 00:08:15.046 ], 00:08:15.046 "driver_specific": { 00:08:15.046 "raid": { 00:08:15.046 "uuid": "6c1b4bb0-c76a-4210-9d55-295c2dc42cd8", 00:08:15.046 "strip_size_kb": 64, 00:08:15.046 "state": "online", 00:08:15.046 "raid_level": "concat", 00:08:15.046 "superblock": true, 00:08:15.046 "num_base_bdevs": 2, 00:08:15.046 "num_base_bdevs_discovered": 2, 00:08:15.046 "num_base_bdevs_operational": 2, 00:08:15.046 "base_bdevs_list": [ 00:08:15.046 { 00:08:15.046 "name": "pt1", 00:08:15.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.046 "is_configured": true, 00:08:15.046 "data_offset": 2048, 00:08:15.046 "data_size": 63488 00:08:15.046 }, 00:08:15.046 { 00:08:15.046 "name": "pt2", 00:08:15.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.046 "is_configured": true, 00:08:15.046 "data_offset": 2048, 00:08:15.046 "data_size": 63488 00:08:15.046 } 00:08:15.046 ] 00:08:15.046 } 00:08:15.046 } 00:08:15.046 }' 00:08:15.046 18:05:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:15.046 pt2' 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.046 18:05:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.046 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.046 [2024-12-06 18:05:27.198293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6c1b4bb0-c76a-4210-9d55-295c2dc42cd8 '!=' 6c1b4bb0-c76a-4210-9d55-295c2dc42cd8 ']' 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62599 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62599 ']' 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62599 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:15.306 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.307 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62599 00:08:15.307 killing process with pid 62599 00:08:15.307 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.307 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.307 18:05:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62599' 00:08:15.307 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62599 00:08:15.307 [2024-12-06 18:05:27.279939] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.307 18:05:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62599 00:08:15.307 [2024-12-06 18:05:27.280053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.307 [2024-12-06 18:05:27.280123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.307 [2024-12-06 18:05:27.280141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:15.567 [2024-12-06 18:05:27.518746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.945 ************************************ 00:08:16.945 END TEST raid_superblock_test 00:08:16.945 ************************************ 00:08:16.945 18:05:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:16.945 00:08:16.945 real 0m4.819s 00:08:16.945 user 0m6.759s 00:08:16.945 sys 0m0.794s 00:08:16.945 18:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.945 18:05:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.945 18:05:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:16.945 18:05:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:16.945 18:05:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.945 18:05:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.945 ************************************ 00:08:16.945 START TEST raid_read_error_test 00:08:16.945 ************************************ 00:08:16.945 18:05:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.945 18:05:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.of7D4dmmGg 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62816 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62816 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62816 ']' 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.945 18:05:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.945 [2024-12-06 18:05:28.887737] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:08:16.945 [2024-12-06 18:05:28.888356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62816 ] 00:08:16.945 [2024-12-06 18:05:29.062097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.204 [2024-12-06 18:05:29.182919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.461 [2024-12-06 18:05:29.393401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.461 [2024-12-06 18:05:29.393547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.721 BaseBdev1_malloc 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.721 true 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.721 [2024-12-06 18:05:29.844844] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.721 [2024-12-06 18:05:29.844909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.721 [2024-12-06 18:05:29.844933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:17.721 [2024-12-06 18:05:29.844945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.721 [2024-12-06 18:05:29.847335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.721 [2024-12-06 18:05:29.847380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.721 BaseBdev1 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.721 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.980 BaseBdev2_malloc 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.980 true 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.980 [2024-12-06 18:05:29.905189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.980 [2024-12-06 18:05:29.905247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.980 [2024-12-06 18:05:29.905266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:17.980 [2024-12-06 18:05:29.905277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.980 [2024-12-06 18:05:29.907689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.980 [2024-12-06 18:05:29.907731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.980 BaseBdev2 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.980 [2024-12-06 18:05:29.913254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:17.980 [2024-12-06 18:05:29.915314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.980 [2024-12-06 18:05:29.915543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.980 [2024-12-06 18:05:29.915562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:17.980 [2024-12-06 18:05:29.915827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:17.980 [2024-12-06 18:05:29.916024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.980 [2024-12-06 18:05:29.916038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:17.980 [2024-12-06 18:05:29.916225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.980 "name": "raid_bdev1", 00:08:17.980 "uuid": "183b7356-8f6c-4b95-bb7a-d3965bed4f92", 00:08:17.980 "strip_size_kb": 64, 00:08:17.980 "state": "online", 00:08:17.980 "raid_level": "concat", 00:08:17.980 "superblock": true, 00:08:17.980 "num_base_bdevs": 2, 00:08:17.980 "num_base_bdevs_discovered": 2, 00:08:17.980 "num_base_bdevs_operational": 2, 00:08:17.980 "base_bdevs_list": [ 00:08:17.980 { 00:08:17.980 "name": "BaseBdev1", 00:08:17.980 "uuid": "415d5beb-6c53-5a47-9549-88c86adc6954", 00:08:17.980 "is_configured": true, 00:08:17.980 "data_offset": 2048, 00:08:17.980 "data_size": 63488 00:08:17.980 }, 00:08:17.980 { 00:08:17.980 "name": "BaseBdev2", 00:08:17.980 "uuid": "7cb96638-d025-5194-80c0-1cb277ffc9c9", 00:08:17.980 "is_configured": true, 00:08:17.980 "data_offset": 2048, 00:08:17.980 "data_size": 63488 00:08:17.980 } 00:08:17.980 ] 00:08:17.980 }' 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.980 18:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.240 18:05:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:18.240 18:05:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:18.498 [2024-12-06 18:05:30.449824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.434 "name": "raid_bdev1", 00:08:19.434 "uuid": "183b7356-8f6c-4b95-bb7a-d3965bed4f92", 00:08:19.434 "strip_size_kb": 64, 00:08:19.434 "state": "online", 00:08:19.434 "raid_level": "concat", 00:08:19.434 "superblock": true, 00:08:19.434 "num_base_bdevs": 2, 00:08:19.434 "num_base_bdevs_discovered": 2, 00:08:19.434 "num_base_bdevs_operational": 2, 00:08:19.434 "base_bdevs_list": [ 00:08:19.434 { 00:08:19.434 "name": "BaseBdev1", 00:08:19.434 "uuid": "415d5beb-6c53-5a47-9549-88c86adc6954", 00:08:19.434 "is_configured": true, 00:08:19.434 "data_offset": 2048, 00:08:19.434 "data_size": 63488 00:08:19.434 }, 00:08:19.434 { 00:08:19.434 "name": "BaseBdev2", 00:08:19.434 "uuid": "7cb96638-d025-5194-80c0-1cb277ffc9c9", 00:08:19.434 "is_configured": true, 00:08:19.434 "data_offset": 2048, 00:08:19.434 "data_size": 63488 00:08:19.434 } 00:08:19.434 ] 00:08:19.434 }' 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.434 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.692 18:05:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.692 [2024-12-06 18:05:31.802450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.692 [2024-12-06 18:05:31.802486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.692 [2024-12-06 18:05:31.805380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.692 [2024-12-06 18:05:31.805424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.692 [2024-12-06 18:05:31.805455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.692 [2024-12-06 18:05:31.805469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:19.692 { 00:08:19.692 "results": [ 00:08:19.692 { 00:08:19.692 "job": "raid_bdev1", 00:08:19.692 "core_mask": "0x1", 00:08:19.692 "workload": "randrw", 00:08:19.692 "percentage": 50, 00:08:19.692 "status": "finished", 00:08:19.692 "queue_depth": 1, 00:08:19.692 "io_size": 131072, 00:08:19.692 "runtime": 1.353253, 00:08:19.692 "iops": 14595.940300889783, 00:08:19.692 "mibps": 1824.4925376112228, 00:08:19.692 "io_failed": 1, 00:08:19.692 "io_timeout": 0, 00:08:19.692 "avg_latency_us": 94.62986025891375, 00:08:19.692 "min_latency_us": 27.276855895196505, 00:08:19.692 "max_latency_us": 1688.482096069869 00:08:19.692 } 00:08:19.692 ], 00:08:19.692 "core_count": 1 00:08:19.692 } 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62816 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62816 ']' 00:08:19.692 18:05:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62816 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62816 00:08:19.692 killing process with pid 62816 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62816' 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62816 00:08:19.692 18:05:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62816 00:08:19.692 [2024-12-06 18:05:31.845996] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.950 [2024-12-06 18:05:31.989423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.of7D4dmmGg 00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:21.326 ************************************ 00:08:21.326 END TEST raid_read_error_test 00:08:21.326 ************************************ 00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:21.326 00:08:21.326 real 0m4.502s 00:08:21.326 user 0m5.410s 00:08:21.326 sys 0m0.539s 00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.326 18:05:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.326 18:05:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:21.326 18:05:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.326 18:05:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.326 18:05:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.326 ************************************ 00:08:21.326 START TEST raid_write_error_test 00:08:21.326 ************************************ 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.326 18:05:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wiEzGwKI2K 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62956 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62956 00:08:21.326 18:05:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62956 ']' 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.326 18:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.326 [2024-12-06 18:05:33.466048] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:08:21.326 [2024-12-06 18:05:33.466203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62956 ] 00:08:21.585 [2024-12-06 18:05:33.643890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.844 [2024-12-06 18:05:33.761892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.844 [2024-12-06 18:05:33.976977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.844 [2024-12-06 18:05:33.977044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.413 BaseBdev1_malloc 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.413 true 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.413 [2024-12-06 18:05:34.369657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.413 [2024-12-06 18:05:34.369711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.413 [2024-12-06 18:05:34.369730] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.413 [2024-12-06 18:05:34.369741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.413 [2024-12-06 18:05:34.371849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.413 [2024-12-06 18:05:34.371889] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.413 BaseBdev1 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.413 BaseBdev2_malloc 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.413 true 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.413 [2024-12-06 18:05:34.426783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.413 [2024-12-06 18:05:34.426846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.413 [2024-12-06 18:05:34.426864] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:22.413 
[2024-12-06 18:05:34.426876] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.413 [2024-12-06 18:05:34.429258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.413 [2024-12-06 18:05:34.429297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.413 BaseBdev2 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.413 [2024-12-06 18:05:34.434829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.413 [2024-12-06 18:05:34.436710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.413 [2024-12-06 18:05:34.436897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.413 [2024-12-06 18:05:34.436913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:22.413 [2024-12-06 18:05:34.437163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:22.413 [2024-12-06 18:05:34.437336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.413 [2024-12-06 18:05:34.437349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:22.413 [2024-12-06 18:05:34.437523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.413 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.413 
18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.414 "name": "raid_bdev1", 00:08:22.414 "uuid": "68dc1f1c-5648-4e55-bd9c-36372565708e", 00:08:22.414 "strip_size_kb": 64, 00:08:22.414 "state": "online", 00:08:22.414 "raid_level": "concat", 00:08:22.414 "superblock": true, 
00:08:22.414 "num_base_bdevs": 2, 00:08:22.414 "num_base_bdevs_discovered": 2, 00:08:22.414 "num_base_bdevs_operational": 2, 00:08:22.414 "base_bdevs_list": [ 00:08:22.414 { 00:08:22.414 "name": "BaseBdev1", 00:08:22.414 "uuid": "6d184404-aee0-58f0-8029-35e75091c011", 00:08:22.414 "is_configured": true, 00:08:22.414 "data_offset": 2048, 00:08:22.414 "data_size": 63488 00:08:22.414 }, 00:08:22.414 { 00:08:22.414 "name": "BaseBdev2", 00:08:22.414 "uuid": "5a3ce780-5a47-53aa-818a-d84744ec2356", 00:08:22.414 "is_configured": true, 00:08:22.414 "data_offset": 2048, 00:08:22.414 "data_size": 63488 00:08:22.414 } 00:08:22.414 ] 00:08:22.414 }' 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.414 18:05:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.983 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:22.983 18:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:22.983 [2024-12-06 18:05:34.955475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.918 "name": "raid_bdev1", 00:08:23.918 "uuid": "68dc1f1c-5648-4e55-bd9c-36372565708e", 00:08:23.918 "strip_size_kb": 64, 00:08:23.918 "state": "online", 00:08:23.918 "raid_level": "concat", 
00:08:23.918 "superblock": true, 00:08:23.918 "num_base_bdevs": 2, 00:08:23.918 "num_base_bdevs_discovered": 2, 00:08:23.918 "num_base_bdevs_operational": 2, 00:08:23.918 "base_bdevs_list": [ 00:08:23.918 { 00:08:23.918 "name": "BaseBdev1", 00:08:23.918 "uuid": "6d184404-aee0-58f0-8029-35e75091c011", 00:08:23.918 "is_configured": true, 00:08:23.918 "data_offset": 2048, 00:08:23.918 "data_size": 63488 00:08:23.918 }, 00:08:23.918 { 00:08:23.918 "name": "BaseBdev2", 00:08:23.918 "uuid": "5a3ce780-5a47-53aa-818a-d84744ec2356", 00:08:23.918 "is_configured": true, 00:08:23.918 "data_offset": 2048, 00:08:23.918 "data_size": 63488 00:08:23.918 } 00:08:23.918 ] 00:08:23.918 }' 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.918 18:05:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.178 [2024-12-06 18:05:36.323880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.178 [2024-12-06 18:05:36.323922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.178 [2024-12-06 18:05:36.326753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.178 [2024-12-06 18:05:36.326830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.178 [2024-12-06 18:05:36.326893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.178 [2024-12-06 18:05:36.326941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:24.178 { 
00:08:24.178 "results": [ 00:08:24.178 { 00:08:24.178 "job": "raid_bdev1", 00:08:24.178 "core_mask": "0x1", 00:08:24.178 "workload": "randrw", 00:08:24.178 "percentage": 50, 00:08:24.178 "status": "finished", 00:08:24.178 "queue_depth": 1, 00:08:24.178 "io_size": 131072, 00:08:24.178 "runtime": 1.369096, 00:08:24.178 "iops": 15026.7037519648, 00:08:24.178 "mibps": 1878.3379689956, 00:08:24.178 "io_failed": 1, 00:08:24.178 "io_timeout": 0, 00:08:24.178 "avg_latency_us": 92.02965883510073, 00:08:24.178 "min_latency_us": 27.388646288209607, 00:08:24.178 "max_latency_us": 1352.216593886463 00:08:24.178 } 00:08:24.178 ], 00:08:24.178 "core_count": 1 00:08:24.178 } 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62956 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62956 ']' 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62956 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.178 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62956 00:08:24.437 killing process with pid 62956 00:08:24.437 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.437 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.437 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62956' 00:08:24.437 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62956 00:08:24.437 [2024-12-06 18:05:36.360932] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.437 18:05:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62956 00:08:24.437 [2024-12-06 18:05:36.508137] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.815 18:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wiEzGwKI2K 00:08:25.816 18:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:25.816 18:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:25.816 18:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:25.816 18:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:25.816 18:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.816 18:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:25.816 18:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:25.816 ************************************ 00:08:25.816 END TEST raid_write_error_test 00:08:25.816 ************************************ 00:08:25.816 00:08:25.816 real 0m4.439s 00:08:25.816 user 0m5.277s 00:08:25.816 sys 0m0.579s 00:08:25.816 18:05:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.816 18:05:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.816 18:05:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:25.816 18:05:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:25.816 18:05:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:25.816 18:05:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.816 18:05:37 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.816 ************************************ 00:08:25.816 START TEST raid_state_function_test 00:08:25.816 ************************************ 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63100 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63100' 00:08:25.816 Process raid pid: 63100 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63100 00:08:25.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63100 ']' 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.816 18:05:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.816 [2024-12-06 18:05:37.962165] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:08:25.816 [2024-12-06 18:05:37.962392] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.124 [2024-12-06 18:05:38.144467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.124 [2024-12-06 18:05:38.273932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.383 [2024-12-06 18:05:38.503435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.383 [2024-12-06 18:05:38.503609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.951 [2024-12-06 18:05:38.917125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.951 [2024-12-06 18:05:38.917190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.951 [2024-12-06 18:05:38.917203] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:08:26.951 [2024-12-06 18:05:38.917215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.951 "name": "Existed_Raid", 00:08:26.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.951 "strip_size_kb": 0, 00:08:26.951 "state": "configuring", 00:08:26.951 "raid_level": "raid1", 00:08:26.951 "superblock": false, 00:08:26.951 "num_base_bdevs": 2, 00:08:26.951 "num_base_bdevs_discovered": 0, 00:08:26.951 "num_base_bdevs_operational": 2, 00:08:26.951 "base_bdevs_list": [ 00:08:26.951 { 00:08:26.951 "name": "BaseBdev1", 00:08:26.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.951 "is_configured": false, 00:08:26.951 "data_offset": 0, 00:08:26.951 "data_size": 0 00:08:26.951 }, 00:08:26.951 { 00:08:26.951 "name": "BaseBdev2", 00:08:26.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.951 "is_configured": false, 00:08:26.951 "data_offset": 0, 00:08:26.951 "data_size": 0 00:08:26.951 } 00:08:26.951 ] 00:08:26.951 }' 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.951 18:05:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.519 [2024-12-06 18:05:39.400287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.519 [2024-12-06 18:05:39.400340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.519 [2024-12-06 18:05:39.408283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.519 [2024-12-06 18:05:39.408344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.519 [2024-12-06 18:05:39.408356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.519 [2024-12-06 18:05:39.408370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.519 [2024-12-06 18:05:39.459376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.519 BaseBdev1 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:27.519 
18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.519 [ 00:08:27.519 { 00:08:27.519 "name": "BaseBdev1", 00:08:27.519 "aliases": [ 00:08:27.519 "3a5db35e-456c-4cde-a2f9-c1cd819b459b" 00:08:27.519 ], 00:08:27.519 "product_name": "Malloc disk", 00:08:27.519 "block_size": 512, 00:08:27.519 "num_blocks": 65536, 00:08:27.519 "uuid": "3a5db35e-456c-4cde-a2f9-c1cd819b459b", 00:08:27.519 "assigned_rate_limits": { 00:08:27.519 "rw_ios_per_sec": 0, 00:08:27.519 "rw_mbytes_per_sec": 0, 00:08:27.519 "r_mbytes_per_sec": 0, 00:08:27.519 "w_mbytes_per_sec": 0 00:08:27.519 }, 00:08:27.519 "claimed": true, 00:08:27.519 "claim_type": "exclusive_write", 00:08:27.519 "zoned": false, 00:08:27.519 "supported_io_types": { 00:08:27.519 "read": true, 00:08:27.519 "write": true, 00:08:27.519 "unmap": true, 00:08:27.519 "flush": true, 00:08:27.519 "reset": true, 00:08:27.519 "nvme_admin": false, 00:08:27.519 "nvme_io": false, 00:08:27.519 "nvme_io_md": false, 00:08:27.519 "write_zeroes": true, 00:08:27.519 "zcopy": true, 00:08:27.519 "get_zone_info": 
false, 00:08:27.519 "zone_management": false, 00:08:27.519 "zone_append": false, 00:08:27.519 "compare": false, 00:08:27.519 "compare_and_write": false, 00:08:27.519 "abort": true, 00:08:27.519 "seek_hole": false, 00:08:27.519 "seek_data": false, 00:08:27.519 "copy": true, 00:08:27.519 "nvme_iov_md": false 00:08:27.519 }, 00:08:27.519 "memory_domains": [ 00:08:27.519 { 00:08:27.519 "dma_device_id": "system", 00:08:27.519 "dma_device_type": 1 00:08:27.519 }, 00:08:27.519 { 00:08:27.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.519 "dma_device_type": 2 00:08:27.519 } 00:08:27.519 ], 00:08:27.519 "driver_specific": {} 00:08:27.519 } 00:08:27.519 ] 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.519 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.519 "name": "Existed_Raid", 00:08:27.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.519 "strip_size_kb": 0, 00:08:27.519 "state": "configuring", 00:08:27.519 "raid_level": "raid1", 00:08:27.519 "superblock": false, 00:08:27.519 "num_base_bdevs": 2, 00:08:27.519 "num_base_bdevs_discovered": 1, 00:08:27.519 "num_base_bdevs_operational": 2, 00:08:27.519 "base_bdevs_list": [ 00:08:27.519 { 00:08:27.519 "name": "BaseBdev1", 00:08:27.519 "uuid": "3a5db35e-456c-4cde-a2f9-c1cd819b459b", 00:08:27.519 "is_configured": true, 00:08:27.519 "data_offset": 0, 00:08:27.520 "data_size": 65536 00:08:27.520 }, 00:08:27.520 { 00:08:27.520 "name": "BaseBdev2", 00:08:27.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.520 "is_configured": false, 00:08:27.520 "data_offset": 0, 00:08:27.520 "data_size": 0 00:08:27.520 } 00:08:27.520 ] 00:08:27.520 }' 00:08:27.520 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.520 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.778 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.778 18:05:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.778 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.778 [2024-12-06 18:05:39.910786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.778 [2024-12-06 18:05:39.910921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:27.778 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.778 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:27.778 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.778 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.778 [2024-12-06 18:05:39.918838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.778 [2024-12-06 18:05:39.921153] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.778 [2024-12-06 18:05:39.921265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.779 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.039 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.039 "name": "Existed_Raid", 00:08:28.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.039 "strip_size_kb": 0, 00:08:28.039 "state": "configuring", 00:08:28.039 "raid_level": "raid1", 00:08:28.039 "superblock": false, 00:08:28.039 "num_base_bdevs": 2, 00:08:28.039 "num_base_bdevs_discovered": 1, 00:08:28.039 "num_base_bdevs_operational": 2, 00:08:28.039 "base_bdevs_list": [ 00:08:28.039 { 00:08:28.039 "name": "BaseBdev1", 00:08:28.039 "uuid": "3a5db35e-456c-4cde-a2f9-c1cd819b459b", 00:08:28.039 
"is_configured": true, 00:08:28.039 "data_offset": 0, 00:08:28.039 "data_size": 65536 00:08:28.039 }, 00:08:28.039 { 00:08:28.039 "name": "BaseBdev2", 00:08:28.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.039 "is_configured": false, 00:08:28.039 "data_offset": 0, 00:08:28.039 "data_size": 0 00:08:28.039 } 00:08:28.039 ] 00:08:28.039 }' 00:08:28.039 18:05:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.039 18:05:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.300 [2024-12-06 18:05:40.382540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.300 [2024-12-06 18:05:40.382618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.300 [2024-12-06 18:05:40.382629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:28.300 [2024-12-06 18:05:40.382928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:28.300 [2024-12-06 18:05:40.383175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.300 [2024-12-06 18:05:40.383193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:28.300 [2024-12-06 18:05:40.383544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.300 BaseBdev2 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.300 [ 00:08:28.300 { 00:08:28.300 "name": "BaseBdev2", 00:08:28.300 "aliases": [ 00:08:28.300 "a872d4ca-b4b8-4b1d-8060-3fd2b2b997e4" 00:08:28.300 ], 00:08:28.300 "product_name": "Malloc disk", 00:08:28.300 "block_size": 512, 00:08:28.300 "num_blocks": 65536, 00:08:28.300 "uuid": "a872d4ca-b4b8-4b1d-8060-3fd2b2b997e4", 00:08:28.300 "assigned_rate_limits": { 00:08:28.300 "rw_ios_per_sec": 0, 00:08:28.300 "rw_mbytes_per_sec": 0, 00:08:28.300 "r_mbytes_per_sec": 0, 00:08:28.300 "w_mbytes_per_sec": 0 00:08:28.300 }, 00:08:28.300 "claimed": true, 00:08:28.300 "claim_type": 
"exclusive_write", 00:08:28.300 "zoned": false, 00:08:28.300 "supported_io_types": { 00:08:28.300 "read": true, 00:08:28.300 "write": true, 00:08:28.300 "unmap": true, 00:08:28.300 "flush": true, 00:08:28.300 "reset": true, 00:08:28.300 "nvme_admin": false, 00:08:28.300 "nvme_io": false, 00:08:28.300 "nvme_io_md": false, 00:08:28.300 "write_zeroes": true, 00:08:28.300 "zcopy": true, 00:08:28.300 "get_zone_info": false, 00:08:28.300 "zone_management": false, 00:08:28.300 "zone_append": false, 00:08:28.300 "compare": false, 00:08:28.300 "compare_and_write": false, 00:08:28.300 "abort": true, 00:08:28.300 "seek_hole": false, 00:08:28.300 "seek_data": false, 00:08:28.300 "copy": true, 00:08:28.300 "nvme_iov_md": false 00:08:28.300 }, 00:08:28.300 "memory_domains": [ 00:08:28.300 { 00:08:28.300 "dma_device_id": "system", 00:08:28.300 "dma_device_type": 1 00:08:28.300 }, 00:08:28.300 { 00:08:28.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.300 "dma_device_type": 2 00:08:28.300 } 00:08:28.300 ], 00:08:28.300 "driver_specific": {} 00:08:28.300 } 00:08:28.300 ] 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.300 
18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.300 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.300 "name": "Existed_Raid", 00:08:28.300 "uuid": "aee58cae-5d66-4dd5-9f13-041fe96061b9", 00:08:28.300 "strip_size_kb": 0, 00:08:28.300 "state": "online", 00:08:28.300 "raid_level": "raid1", 00:08:28.300 "superblock": false, 00:08:28.300 "num_base_bdevs": 2, 00:08:28.300 "num_base_bdevs_discovered": 2, 00:08:28.300 "num_base_bdevs_operational": 2, 00:08:28.300 "base_bdevs_list": [ 00:08:28.300 { 00:08:28.300 "name": "BaseBdev1", 00:08:28.300 "uuid": "3a5db35e-456c-4cde-a2f9-c1cd819b459b", 00:08:28.301 "is_configured": true, 00:08:28.301 "data_offset": 0, 00:08:28.301 "data_size": 65536 00:08:28.301 }, 00:08:28.301 { 00:08:28.301 "name": "BaseBdev2", 
00:08:28.301 "uuid": "a872d4ca-b4b8-4b1d-8060-3fd2b2b997e4", 00:08:28.301 "is_configured": true, 00:08:28.301 "data_offset": 0, 00:08:28.301 "data_size": 65536 00:08:28.301 } 00:08:28.301 ] 00:08:28.301 }' 00:08:28.301 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.301 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.872 [2024-12-06 18:05:40.914087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.872 "name": "Existed_Raid", 00:08:28.872 "aliases": [ 00:08:28.872 "aee58cae-5d66-4dd5-9f13-041fe96061b9" 00:08:28.872 ], 
00:08:28.872 "product_name": "Raid Volume", 00:08:28.872 "block_size": 512, 00:08:28.872 "num_blocks": 65536, 00:08:28.872 "uuid": "aee58cae-5d66-4dd5-9f13-041fe96061b9", 00:08:28.872 "assigned_rate_limits": { 00:08:28.872 "rw_ios_per_sec": 0, 00:08:28.872 "rw_mbytes_per_sec": 0, 00:08:28.872 "r_mbytes_per_sec": 0, 00:08:28.872 "w_mbytes_per_sec": 0 00:08:28.872 }, 00:08:28.872 "claimed": false, 00:08:28.872 "zoned": false, 00:08:28.872 "supported_io_types": { 00:08:28.872 "read": true, 00:08:28.872 "write": true, 00:08:28.872 "unmap": false, 00:08:28.872 "flush": false, 00:08:28.872 "reset": true, 00:08:28.872 "nvme_admin": false, 00:08:28.872 "nvme_io": false, 00:08:28.872 "nvme_io_md": false, 00:08:28.872 "write_zeroes": true, 00:08:28.872 "zcopy": false, 00:08:28.872 "get_zone_info": false, 00:08:28.872 "zone_management": false, 00:08:28.872 "zone_append": false, 00:08:28.872 "compare": false, 00:08:28.872 "compare_and_write": false, 00:08:28.872 "abort": false, 00:08:28.872 "seek_hole": false, 00:08:28.872 "seek_data": false, 00:08:28.872 "copy": false, 00:08:28.872 "nvme_iov_md": false 00:08:28.872 }, 00:08:28.872 "memory_domains": [ 00:08:28.872 { 00:08:28.872 "dma_device_id": "system", 00:08:28.872 "dma_device_type": 1 00:08:28.872 }, 00:08:28.872 { 00:08:28.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.872 "dma_device_type": 2 00:08:28.872 }, 00:08:28.872 { 00:08:28.872 "dma_device_id": "system", 00:08:28.872 "dma_device_type": 1 00:08:28.872 }, 00:08:28.872 { 00:08:28.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.872 "dma_device_type": 2 00:08:28.872 } 00:08:28.872 ], 00:08:28.872 "driver_specific": { 00:08:28.872 "raid": { 00:08:28.872 "uuid": "aee58cae-5d66-4dd5-9f13-041fe96061b9", 00:08:28.872 "strip_size_kb": 0, 00:08:28.872 "state": "online", 00:08:28.872 "raid_level": "raid1", 00:08:28.872 "superblock": false, 00:08:28.872 "num_base_bdevs": 2, 00:08:28.872 "num_base_bdevs_discovered": 2, 00:08:28.872 "num_base_bdevs_operational": 
2, 00:08:28.872 "base_bdevs_list": [ 00:08:28.872 { 00:08:28.872 "name": "BaseBdev1", 00:08:28.872 "uuid": "3a5db35e-456c-4cde-a2f9-c1cd819b459b", 00:08:28.872 "is_configured": true, 00:08:28.872 "data_offset": 0, 00:08:28.872 "data_size": 65536 00:08:28.872 }, 00:08:28.872 { 00:08:28.872 "name": "BaseBdev2", 00:08:28.872 "uuid": "a872d4ca-b4b8-4b1d-8060-3fd2b2b997e4", 00:08:28.872 "is_configured": true, 00:08:28.872 "data_offset": 0, 00:08:28.872 "data_size": 65536 00:08:28.872 } 00:08:28.872 ] 00:08:28.872 } 00:08:28.872 } 00:08:28.872 }' 00:08:28.872 18:05:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.872 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:28.872 BaseBdev2' 00:08:28.872 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.132 18:05:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.132 [2024-12-06 18:05:41.137431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:29.132 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.133 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.133 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.133 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.133 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.133 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.133 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.133 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.133 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.392 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.392 "name": "Existed_Raid", 00:08:29.392 "uuid": 
"aee58cae-5d66-4dd5-9f13-041fe96061b9", 00:08:29.392 "strip_size_kb": 0, 00:08:29.392 "state": "online", 00:08:29.392 "raid_level": "raid1", 00:08:29.392 "superblock": false, 00:08:29.392 "num_base_bdevs": 2, 00:08:29.392 "num_base_bdevs_discovered": 1, 00:08:29.392 "num_base_bdevs_operational": 1, 00:08:29.392 "base_bdevs_list": [ 00:08:29.392 { 00:08:29.392 "name": null, 00:08:29.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.392 "is_configured": false, 00:08:29.392 "data_offset": 0, 00:08:29.392 "data_size": 65536 00:08:29.392 }, 00:08:29.392 { 00:08:29.392 "name": "BaseBdev2", 00:08:29.392 "uuid": "a872d4ca-b4b8-4b1d-8060-3fd2b2b997e4", 00:08:29.392 "is_configured": true, 00:08:29.392 "data_offset": 0, 00:08:29.392 "data_size": 65536 00:08:29.392 } 00:08:29.392 ] 00:08:29.392 }' 00:08:29.392 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.392 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.655 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.655 [2024-12-06 18:05:41.756495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:29.655 [2024-12-06 18:05:41.756701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.920 [2024-12-06 18:05:41.874244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.920 [2024-12-06 18:05:41.874317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.920 [2024-12-06 18:05:41.874332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:29.920 
18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63100 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63100 ']' 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63100 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63100 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63100' 00:08:29.920 killing process with pid 63100 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63100 00:08:29.920 [2024-12-06 18:05:41.957912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.920 18:05:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63100 00:08:29.920 [2024-12-06 18:05:41.978105] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.303 ************************************ 00:08:31.303 END TEST raid_state_function_test 00:08:31.303 ************************************ 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.303 00:08:31.303 real 0m5.433s 00:08:31.303 user 
0m7.782s 00:08:31.303 sys 0m0.830s 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.303 18:05:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:31.303 18:05:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.303 18:05:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.303 18:05:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.303 ************************************ 00:08:31.303 START TEST raid_state_function_test_sb 00:08:31.303 ************************************ 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63353 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63353' 00:08:31.303 Process raid pid: 63353 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63353 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 
-- # '[' -z 63353 ']' 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.303 18:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.562 [2024-12-06 18:05:43.492611] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:08:31.562 [2024-12-06 18:05:43.492924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.562 [2024-12-06 18:05:43.668359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.820 [2024-12-06 18:05:43.813845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.078 [2024-12-06 18:05:44.062427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.078 [2024-12-06 18:05:44.062603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.337 [2024-12-06 18:05:44.406199] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.337 [2024-12-06 18:05:44.406362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.337 [2024-12-06 18:05:44.406403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.337 [2024-12-06 18:05:44.406434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.337 18:05:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.337 "name": "Existed_Raid", 00:08:32.337 "uuid": "2bec78b6-745e-4657-b5ef-ada3af599a3b", 00:08:32.337 "strip_size_kb": 0, 00:08:32.337 "state": "configuring", 00:08:32.337 "raid_level": "raid1", 00:08:32.337 "superblock": true, 00:08:32.337 "num_base_bdevs": 2, 00:08:32.337 "num_base_bdevs_discovered": 0, 00:08:32.337 "num_base_bdevs_operational": 2, 00:08:32.337 "base_bdevs_list": [ 00:08:32.337 { 00:08:32.337 "name": "BaseBdev1", 00:08:32.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.337 "is_configured": false, 00:08:32.337 "data_offset": 0, 00:08:32.337 "data_size": 0 00:08:32.337 }, 00:08:32.337 { 00:08:32.337 "name": "BaseBdev2", 00:08:32.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.337 "is_configured": false, 00:08:32.337 "data_offset": 0, 00:08:32.337 "data_size": 0 00:08:32.337 } 00:08:32.337 ] 00:08:32.337 }' 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.337 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.915 
18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 [2024-12-06 18:05:44.889280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.915 [2024-12-06 18:05:44.889397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 [2024-12-06 18:05:44.901316] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.915 [2024-12-06 18:05:44.901380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.915 [2024-12-06 18:05:44.901392] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.915 [2024-12-06 18:05:44.901406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 [2024-12-06 
18:05:44.956420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.915 BaseBdev1 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 [ 00:08:32.915 { 00:08:32.915 "name": "BaseBdev1", 00:08:32.915 "aliases": [ 00:08:32.915 "2fa00aa1-8502-4309-9ecc-a76046948821" 00:08:32.915 ], 00:08:32.915 "product_name": "Malloc disk", 00:08:32.915 "block_size": 512, 00:08:32.915 "num_blocks": 
65536, 00:08:32.915 "uuid": "2fa00aa1-8502-4309-9ecc-a76046948821", 00:08:32.915 "assigned_rate_limits": { 00:08:32.915 "rw_ios_per_sec": 0, 00:08:32.915 "rw_mbytes_per_sec": 0, 00:08:32.915 "r_mbytes_per_sec": 0, 00:08:32.915 "w_mbytes_per_sec": 0 00:08:32.915 }, 00:08:32.915 "claimed": true, 00:08:32.915 "claim_type": "exclusive_write", 00:08:32.915 "zoned": false, 00:08:32.915 "supported_io_types": { 00:08:32.915 "read": true, 00:08:32.915 "write": true, 00:08:32.915 "unmap": true, 00:08:32.915 "flush": true, 00:08:32.915 "reset": true, 00:08:32.915 "nvme_admin": false, 00:08:32.915 "nvme_io": false, 00:08:32.915 "nvme_io_md": false, 00:08:32.915 "write_zeroes": true, 00:08:32.915 "zcopy": true, 00:08:32.915 "get_zone_info": false, 00:08:32.915 "zone_management": false, 00:08:32.915 "zone_append": false, 00:08:32.915 "compare": false, 00:08:32.915 "compare_and_write": false, 00:08:32.915 "abort": true, 00:08:32.915 "seek_hole": false, 00:08:32.915 "seek_data": false, 00:08:32.915 "copy": true, 00:08:32.915 "nvme_iov_md": false 00:08:32.915 }, 00:08:32.915 "memory_domains": [ 00:08:32.915 { 00:08:32.915 "dma_device_id": "system", 00:08:32.915 "dma_device_type": 1 00:08:32.915 }, 00:08:32.915 { 00:08:32.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.915 "dma_device_type": 2 00:08:32.915 } 00:08:32.915 ], 00:08:32.915 "driver_specific": {} 00:08:32.915 } 00:08:32.915 ] 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.916 18:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.916 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.916 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.916 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.916 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.916 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.916 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.916 "name": "Existed_Raid", 00:08:32.916 "uuid": "c862e33c-3ce9-4261-ae74-4925ad6ca5b7", 00:08:32.916 "strip_size_kb": 0, 00:08:32.916 "state": "configuring", 00:08:32.916 "raid_level": "raid1", 00:08:32.916 "superblock": true, 00:08:32.916 "num_base_bdevs": 2, 00:08:32.916 "num_base_bdevs_discovered": 1, 00:08:32.916 "num_base_bdevs_operational": 2, 00:08:32.916 "base_bdevs_list": [ 00:08:32.916 { 00:08:32.916 "name": "BaseBdev1", 00:08:32.916 "uuid": 
"2fa00aa1-8502-4309-9ecc-a76046948821", 00:08:32.916 "is_configured": true, 00:08:32.916 "data_offset": 2048, 00:08:32.916 "data_size": 63488 00:08:32.916 }, 00:08:32.916 { 00:08:32.916 "name": "BaseBdev2", 00:08:32.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.916 "is_configured": false, 00:08:32.916 "data_offset": 0, 00:08:32.916 "data_size": 0 00:08:32.916 } 00:08:32.916 ] 00:08:32.916 }' 00:08:32.916 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.916 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.521 [2024-12-06 18:05:45.439855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.521 [2024-12-06 18:05:45.439923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.521 [2024-12-06 18:05:45.451939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.521 [2024-12-06 18:05:45.454222] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:08:33.521 [2024-12-06 18:05:45.454281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.521 "name": "Existed_Raid", 00:08:33.521 "uuid": "4ff9b522-c748-446c-ac6a-55ce046eb656", 00:08:33.521 "strip_size_kb": 0, 00:08:33.521 "state": "configuring", 00:08:33.521 "raid_level": "raid1", 00:08:33.521 "superblock": true, 00:08:33.521 "num_base_bdevs": 2, 00:08:33.521 "num_base_bdevs_discovered": 1, 00:08:33.521 "num_base_bdevs_operational": 2, 00:08:33.521 "base_bdevs_list": [ 00:08:33.521 { 00:08:33.521 "name": "BaseBdev1", 00:08:33.521 "uuid": "2fa00aa1-8502-4309-9ecc-a76046948821", 00:08:33.521 "is_configured": true, 00:08:33.521 "data_offset": 2048, 00:08:33.521 "data_size": 63488 00:08:33.521 }, 00:08:33.521 { 00:08:33.521 "name": "BaseBdev2", 00:08:33.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.521 "is_configured": false, 00:08:33.521 "data_offset": 0, 00:08:33.521 "data_size": 0 00:08:33.521 } 00:08:33.521 ] 00:08:33.521 }' 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.521 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.780 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.780 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.780 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.780 [2024-12-06 18:05:45.942611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.780 [2024-12-06 18:05:45.943009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:08:33.780 [2024-12-06 18:05:45.943097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:33.780 [2024-12-06 18:05:45.943460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:33.780 BaseBdev2 00:08:33.780 [2024-12-06 18:05:45.943713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:33.780 [2024-12-06 18:05:45.943770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:33.780 [2024-12-06 18:05:45.943993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.780 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.040 18:05:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.040 [ 00:08:34.040 { 00:08:34.040 "name": "BaseBdev2", 00:08:34.040 "aliases": [ 00:08:34.040 "8a33ab7d-2529-42f7-8e9a-a0fe7667de98" 00:08:34.040 ], 00:08:34.040 "product_name": "Malloc disk", 00:08:34.040 "block_size": 512, 00:08:34.040 "num_blocks": 65536, 00:08:34.040 "uuid": "8a33ab7d-2529-42f7-8e9a-a0fe7667de98", 00:08:34.040 "assigned_rate_limits": { 00:08:34.040 "rw_ios_per_sec": 0, 00:08:34.040 "rw_mbytes_per_sec": 0, 00:08:34.040 "r_mbytes_per_sec": 0, 00:08:34.040 "w_mbytes_per_sec": 0 00:08:34.040 }, 00:08:34.040 "claimed": true, 00:08:34.040 "claim_type": "exclusive_write", 00:08:34.040 "zoned": false, 00:08:34.040 "supported_io_types": { 00:08:34.040 "read": true, 00:08:34.040 "write": true, 00:08:34.040 "unmap": true, 00:08:34.040 "flush": true, 00:08:34.040 "reset": true, 00:08:34.040 "nvme_admin": false, 00:08:34.040 "nvme_io": false, 00:08:34.040 "nvme_io_md": false, 00:08:34.040 "write_zeroes": true, 00:08:34.040 "zcopy": true, 00:08:34.040 "get_zone_info": false, 00:08:34.040 "zone_management": false, 00:08:34.040 "zone_append": false, 00:08:34.040 "compare": false, 00:08:34.040 "compare_and_write": false, 00:08:34.040 "abort": true, 00:08:34.040 "seek_hole": false, 00:08:34.040 "seek_data": false, 00:08:34.040 "copy": true, 00:08:34.040 "nvme_iov_md": false 00:08:34.040 }, 00:08:34.040 "memory_domains": [ 00:08:34.040 { 00:08:34.040 "dma_device_id": "system", 00:08:34.040 "dma_device_type": 1 00:08:34.040 }, 00:08:34.040 { 00:08:34.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.040 "dma_device_type": 2 00:08:34.040 } 00:08:34.040 ], 00:08:34.040 "driver_specific": {} 00:08:34.040 } 00:08:34.040 ] 
00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.040 18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.040 
18:05:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.040 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.040 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.040 "name": "Existed_Raid", 00:08:34.040 "uuid": "4ff9b522-c748-446c-ac6a-55ce046eb656", 00:08:34.040 "strip_size_kb": 0, 00:08:34.040 "state": "online", 00:08:34.040 "raid_level": "raid1", 00:08:34.040 "superblock": true, 00:08:34.040 "num_base_bdevs": 2, 00:08:34.040 "num_base_bdevs_discovered": 2, 00:08:34.040 "num_base_bdevs_operational": 2, 00:08:34.040 "base_bdevs_list": [ 00:08:34.040 { 00:08:34.040 "name": "BaseBdev1", 00:08:34.040 "uuid": "2fa00aa1-8502-4309-9ecc-a76046948821", 00:08:34.040 "is_configured": true, 00:08:34.040 "data_offset": 2048, 00:08:34.040 "data_size": 63488 00:08:34.040 }, 00:08:34.040 { 00:08:34.040 "name": "BaseBdev2", 00:08:34.040 "uuid": "8a33ab7d-2529-42f7-8e9a-a0fe7667de98", 00:08:34.040 "is_configured": true, 00:08:34.040 "data_offset": 2048, 00:08:34.040 "data_size": 63488 00:08:34.040 } 00:08:34.040 ] 00:08:34.040 }' 00:08:34.040 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.040 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.299 18:05:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.299 [2024-12-06 18:05:46.442368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.299 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.558 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.558 "name": "Existed_Raid", 00:08:34.558 "aliases": [ 00:08:34.558 "4ff9b522-c748-446c-ac6a-55ce046eb656" 00:08:34.558 ], 00:08:34.558 "product_name": "Raid Volume", 00:08:34.558 "block_size": 512, 00:08:34.558 "num_blocks": 63488, 00:08:34.558 "uuid": "4ff9b522-c748-446c-ac6a-55ce046eb656", 00:08:34.558 "assigned_rate_limits": { 00:08:34.558 "rw_ios_per_sec": 0, 00:08:34.558 "rw_mbytes_per_sec": 0, 00:08:34.558 "r_mbytes_per_sec": 0, 00:08:34.558 "w_mbytes_per_sec": 0 00:08:34.558 }, 00:08:34.558 "claimed": false, 00:08:34.558 "zoned": false, 00:08:34.558 "supported_io_types": { 00:08:34.558 "read": true, 00:08:34.558 "write": true, 00:08:34.558 "unmap": false, 00:08:34.558 "flush": false, 00:08:34.558 "reset": true, 00:08:34.558 "nvme_admin": false, 00:08:34.558 "nvme_io": false, 00:08:34.558 "nvme_io_md": false, 00:08:34.558 "write_zeroes": true, 00:08:34.558 "zcopy": false, 00:08:34.558 "get_zone_info": false, 00:08:34.558 "zone_management": false, 00:08:34.558 "zone_append": false, 00:08:34.558 "compare": false, 00:08:34.558 "compare_and_write": false, 00:08:34.558 "abort": false, 
00:08:34.558 "seek_hole": false, 00:08:34.558 "seek_data": false, 00:08:34.558 "copy": false, 00:08:34.558 "nvme_iov_md": false 00:08:34.558 }, 00:08:34.558 "memory_domains": [ 00:08:34.558 { 00:08:34.558 "dma_device_id": "system", 00:08:34.558 "dma_device_type": 1 00:08:34.559 }, 00:08:34.559 { 00:08:34.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.559 "dma_device_type": 2 00:08:34.559 }, 00:08:34.559 { 00:08:34.559 "dma_device_id": "system", 00:08:34.559 "dma_device_type": 1 00:08:34.559 }, 00:08:34.559 { 00:08:34.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.559 "dma_device_type": 2 00:08:34.559 } 00:08:34.559 ], 00:08:34.559 "driver_specific": { 00:08:34.559 "raid": { 00:08:34.559 "uuid": "4ff9b522-c748-446c-ac6a-55ce046eb656", 00:08:34.559 "strip_size_kb": 0, 00:08:34.559 "state": "online", 00:08:34.559 "raid_level": "raid1", 00:08:34.559 "superblock": true, 00:08:34.559 "num_base_bdevs": 2, 00:08:34.559 "num_base_bdevs_discovered": 2, 00:08:34.559 "num_base_bdevs_operational": 2, 00:08:34.559 "base_bdevs_list": [ 00:08:34.559 { 00:08:34.559 "name": "BaseBdev1", 00:08:34.559 "uuid": "2fa00aa1-8502-4309-9ecc-a76046948821", 00:08:34.559 "is_configured": true, 00:08:34.559 "data_offset": 2048, 00:08:34.559 "data_size": 63488 00:08:34.559 }, 00:08:34.559 { 00:08:34.559 "name": "BaseBdev2", 00:08:34.559 "uuid": "8a33ab7d-2529-42f7-8e9a-a0fe7667de98", 00:08:34.559 "is_configured": true, 00:08:34.559 "data_offset": 2048, 00:08:34.559 "data_size": 63488 00:08:34.559 } 00:08:34.559 ] 00:08:34.559 } 00:08:34.559 } 00:08:34.559 }' 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:34.559 BaseBdev2' 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.559 18:05:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.559 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.559 [2024-12-06 18:05:46.661763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.817 "name": "Existed_Raid", 00:08:34.817 "uuid": "4ff9b522-c748-446c-ac6a-55ce046eb656", 00:08:34.817 "strip_size_kb": 0, 00:08:34.817 "state": "online", 00:08:34.817 "raid_level": "raid1", 00:08:34.817 "superblock": true, 00:08:34.817 "num_base_bdevs": 2, 00:08:34.817 "num_base_bdevs_discovered": 1, 00:08:34.817 "num_base_bdevs_operational": 1, 00:08:34.817 "base_bdevs_list": [ 00:08:34.817 { 00:08:34.817 "name": null, 00:08:34.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.817 "is_configured": false, 00:08:34.817 "data_offset": 0, 00:08:34.817 "data_size": 63488 00:08:34.817 }, 00:08:34.817 { 00:08:34.817 "name": "BaseBdev2", 00:08:34.817 "uuid": "8a33ab7d-2529-42f7-8e9a-a0fe7667de98", 00:08:34.817 "is_configured": true, 00:08:34.817 "data_offset": 2048, 00:08:34.817 "data_size": 63488 00:08:34.817 } 00:08:34.817 ] 00:08:34.817 }' 00:08:34.817 18:05:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.817 18:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.133 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.133 [2024-12-06 18:05:47.278050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.133 [2024-12-06 18:05:47.278272] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.404 [2024-12-06 18:05:47.393176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.404 [2024-12-06 18:05:47.393336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.404 [2024-12-06 18:05:47.393391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63353 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63353 ']' 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63353 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63353 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63353' 00:08:35.404 killing process with pid 63353 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63353 00:08:35.404 [2024-12-06 18:05:47.491859] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.404 18:05:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63353 00:08:35.404 [2024-12-06 18:05:47.512285] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.785 18:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:36.785 00:08:36.785 real 0m5.446s 00:08:36.785 user 0m7.744s 00:08:36.785 sys 0m0.876s 00:08:36.785 18:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.785 ************************************ 00:08:36.785 END TEST raid_state_function_test_sb 00:08:36.785 ************************************ 00:08:36.785 18:05:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.785 18:05:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:36.785 18:05:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:36.785 18:05:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.785 18:05:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.785 ************************************ 00:08:36.785 START TEST 
raid_superblock_test 00:08:36.785 ************************************ 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63605 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63605 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63605 ']' 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.785 18:05:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.044 [2024-12-06 18:05:48.983752] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:08:37.044 [2024-12-06 18:05:48.983990] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63605 ] 00:08:37.044 [2024-12-06 18:05:49.147148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.303 [2024-12-06 18:05:49.286550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.562 [2024-12-06 18:05:49.535252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.562 [2024-12-06 18:05:49.535400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:37.821 
18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.821 malloc1 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.821 [2024-12-06 18:05:49.951056] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:37.821 [2024-12-06 18:05:49.951212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.821 [2024-12-06 18:05:49.951274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:37.821 [2024-12-06 18:05:49.951319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.821 [2024-12-06 18:05:49.953824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.821 [2024-12-06 18:05:49.953902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:37.821 pt1 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.821 18:05:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.080 malloc2 00:08:38.080 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.081 [2024-12-06 18:05:50.016899] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:38.081 [2024-12-06 18:05:50.016966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.081 [2024-12-06 18:05:50.016996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:38.081 [2024-12-06 18:05:50.017008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.081 [2024-12-06 18:05:50.019470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.081 [2024-12-06 18:05:50.019510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:38.081 
pt2 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.081 [2024-12-06 18:05:50.024937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:38.081 [2024-12-06 18:05:50.026980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:38.081 [2024-12-06 18:05:50.027198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:38.081 [2024-12-06 18:05:50.027220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:38.081 [2024-12-06 18:05:50.027532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:38.081 [2024-12-06 18:05:50.027844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:38.081 [2024-12-06 18:05:50.027873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:38.081 [2024-12-06 18:05:50.028102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.081 "name": "raid_bdev1", 00:08:38.081 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:38.081 "strip_size_kb": 0, 00:08:38.081 "state": "online", 00:08:38.081 "raid_level": "raid1", 00:08:38.081 "superblock": true, 00:08:38.081 "num_base_bdevs": 2, 00:08:38.081 "num_base_bdevs_discovered": 2, 00:08:38.081 "num_base_bdevs_operational": 2, 00:08:38.081 "base_bdevs_list": [ 00:08:38.081 { 00:08:38.081 "name": "pt1", 00:08:38.081 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:38.081 "is_configured": true, 00:08:38.081 "data_offset": 2048, 00:08:38.081 "data_size": 63488 00:08:38.081 }, 00:08:38.081 { 00:08:38.081 "name": "pt2", 00:08:38.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.081 "is_configured": true, 00:08:38.081 "data_offset": 2048, 00:08:38.081 "data_size": 63488 00:08:38.081 } 00:08:38.081 ] 00:08:38.081 }' 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.081 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.340 [2024-12-06 18:05:50.460564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:38.340 "name": "raid_bdev1", 00:08:38.340 "aliases": [ 00:08:38.340 "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b" 00:08:38.340 ], 00:08:38.340 "product_name": "Raid Volume", 00:08:38.340 "block_size": 512, 00:08:38.340 "num_blocks": 63488, 00:08:38.340 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:38.340 "assigned_rate_limits": { 00:08:38.340 "rw_ios_per_sec": 0, 00:08:38.340 "rw_mbytes_per_sec": 0, 00:08:38.340 "r_mbytes_per_sec": 0, 00:08:38.340 "w_mbytes_per_sec": 0 00:08:38.340 }, 00:08:38.340 "claimed": false, 00:08:38.340 "zoned": false, 00:08:38.340 "supported_io_types": { 00:08:38.340 "read": true, 00:08:38.340 "write": true, 00:08:38.340 "unmap": false, 00:08:38.340 "flush": false, 00:08:38.340 "reset": true, 00:08:38.340 "nvme_admin": false, 00:08:38.340 "nvme_io": false, 00:08:38.340 "nvme_io_md": false, 00:08:38.340 "write_zeroes": true, 00:08:38.340 "zcopy": false, 00:08:38.340 "get_zone_info": false, 00:08:38.340 "zone_management": false, 00:08:38.340 "zone_append": false, 00:08:38.340 "compare": false, 00:08:38.340 "compare_and_write": false, 00:08:38.340 "abort": false, 00:08:38.340 "seek_hole": false, 00:08:38.340 "seek_data": false, 00:08:38.340 "copy": false, 00:08:38.340 "nvme_iov_md": false 00:08:38.340 }, 00:08:38.340 "memory_domains": [ 00:08:38.340 { 00:08:38.340 "dma_device_id": "system", 00:08:38.340 "dma_device_type": 1 00:08:38.340 }, 00:08:38.340 { 00:08:38.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.340 "dma_device_type": 2 00:08:38.340 }, 00:08:38.340 { 00:08:38.340 "dma_device_id": "system", 00:08:38.340 "dma_device_type": 1 00:08:38.340 }, 00:08:38.340 { 00:08:38.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.340 "dma_device_type": 2 00:08:38.340 } 00:08:38.340 ], 00:08:38.340 "driver_specific": { 00:08:38.340 "raid": { 00:08:38.340 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:38.340 "strip_size_kb": 0, 00:08:38.340 "state": "online", 00:08:38.340 "raid_level": "raid1", 
00:08:38.340 "superblock": true, 00:08:38.340 "num_base_bdevs": 2, 00:08:38.340 "num_base_bdevs_discovered": 2, 00:08:38.340 "num_base_bdevs_operational": 2, 00:08:38.340 "base_bdevs_list": [ 00:08:38.340 { 00:08:38.340 "name": "pt1", 00:08:38.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.340 "is_configured": true, 00:08:38.340 "data_offset": 2048, 00:08:38.340 "data_size": 63488 00:08:38.340 }, 00:08:38.340 { 00:08:38.340 "name": "pt2", 00:08:38.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.340 "is_configured": true, 00:08:38.340 "data_offset": 2048, 00:08:38.340 "data_size": 63488 00:08:38.340 } 00:08:38.340 ] 00:08:38.340 } 00:08:38.340 } 00:08:38.340 }' 00:08:38.340 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.599 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:38.599 pt2' 00:08:38.599 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.599 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:38.599 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.599 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:38.599 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 [2024-12-06 18:05:50.712147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b ']' 00:08:38.600 18:05:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 [2024-12-06 18:05:50.755697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.600 [2024-12-06 18:05:50.755777] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.600 [2024-12-06 18:05:50.755904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.600 [2024-12-06 18:05:50.756012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.600 [2024-12-06 18:05:50.756077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.600 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:38.860 18:05:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.860 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.861 [2024-12-06 18:05:50.911487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:38.861 [2024-12-06 18:05:50.913718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:38.861 [2024-12-06 18:05:50.913846] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:38.861 [2024-12-06 18:05:50.913949] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:38.861 [2024-12-06 18:05:50.914026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.861 [2024-12-06 18:05:50.914041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:38.861 request: 00:08:38.861 { 00:08:38.861 "name": "raid_bdev1", 00:08:38.861 "raid_level": "raid1", 00:08:38.861 "base_bdevs": [ 00:08:38.861 "malloc1", 00:08:38.861 "malloc2" 00:08:38.861 ], 00:08:38.861 "superblock": false, 00:08:38.861 "method": "bdev_raid_create", 00:08:38.861 "req_id": 1 00:08:38.861 } 00:08:38.861 Got JSON-RPC 
error response 00:08:38.861 response: 00:08:38.861 { 00:08:38.861 "code": -17, 00:08:38.861 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:38.861 } 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.861 [2024-12-06 18:05:50.971357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:38.861 [2024-12-06 18:05:50.971437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:08:38.861 [2024-12-06 18:05:50.971463] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:38.861 [2024-12-06 18:05:50.971477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.861 [2024-12-06 18:05:50.974069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.861 [2024-12-06 18:05:50.974126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:38.861 [2024-12-06 18:05:50.974226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:38.861 [2024-12-06 18:05:50.974286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:38.861 pt1 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.861 18:05:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.861 18:05:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.120 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.120 "name": "raid_bdev1", 00:08:39.120 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:39.120 "strip_size_kb": 0, 00:08:39.120 "state": "configuring", 00:08:39.120 "raid_level": "raid1", 00:08:39.120 "superblock": true, 00:08:39.120 "num_base_bdevs": 2, 00:08:39.120 "num_base_bdevs_discovered": 1, 00:08:39.120 "num_base_bdevs_operational": 2, 00:08:39.120 "base_bdevs_list": [ 00:08:39.120 { 00:08:39.120 "name": "pt1", 00:08:39.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.120 "is_configured": true, 00:08:39.120 "data_offset": 2048, 00:08:39.120 "data_size": 63488 00:08:39.120 }, 00:08:39.120 { 00:08:39.120 "name": null, 00:08:39.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.120 "is_configured": false, 00:08:39.120 "data_offset": 2048, 00:08:39.120 "data_size": 63488 00:08:39.120 } 00:08:39.120 ] 00:08:39.120 }' 00:08:39.120 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.120 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 
00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.380 [2024-12-06 18:05:51.462554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:39.380 [2024-12-06 18:05:51.462699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.380 [2024-12-06 18:05:51.462762] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:39.380 [2024-12-06 18:05:51.462805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.380 [2024-12-06 18:05:51.463385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.380 [2024-12-06 18:05:51.463465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:39.380 [2024-12-06 18:05:51.463593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:39.380 [2024-12-06 18:05:51.463658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:39.380 [2024-12-06 18:05:51.463819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:39.380 [2024-12-06 18:05:51.463866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:39.380 [2024-12-06 18:05:51.464183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:39.380 [2024-12-06 18:05:51.464417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:39.380 [2024-12-06 18:05:51.464461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:08:39.380 [2024-12-06 18:05:51.464693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.380 pt2 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.380 "name": "raid_bdev1", 00:08:39.380 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:39.380 "strip_size_kb": 0, 00:08:39.380 "state": "online", 00:08:39.380 "raid_level": "raid1", 00:08:39.380 "superblock": true, 00:08:39.380 "num_base_bdevs": 2, 00:08:39.380 "num_base_bdevs_discovered": 2, 00:08:39.380 "num_base_bdevs_operational": 2, 00:08:39.380 "base_bdevs_list": [ 00:08:39.380 { 00:08:39.380 "name": "pt1", 00:08:39.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.380 "is_configured": true, 00:08:39.380 "data_offset": 2048, 00:08:39.380 "data_size": 63488 00:08:39.380 }, 00:08:39.380 { 00:08:39.380 "name": "pt2", 00:08:39.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.380 "is_configured": true, 00:08:39.380 "data_offset": 2048, 00:08:39.380 "data_size": 63488 00:08:39.380 } 00:08:39.380 ] 00:08:39.380 }' 00:08:39.380 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.381 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.947 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.947 [2024-12-06 18:05:51.946022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.948 18:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.948 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.948 "name": "raid_bdev1", 00:08:39.948 "aliases": [ 00:08:39.948 "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b" 00:08:39.948 ], 00:08:39.948 "product_name": "Raid Volume", 00:08:39.948 "block_size": 512, 00:08:39.948 "num_blocks": 63488, 00:08:39.948 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:39.948 "assigned_rate_limits": { 00:08:39.948 "rw_ios_per_sec": 0, 00:08:39.948 "rw_mbytes_per_sec": 0, 00:08:39.948 "r_mbytes_per_sec": 0, 00:08:39.948 "w_mbytes_per_sec": 0 00:08:39.948 }, 00:08:39.948 "claimed": false, 00:08:39.948 "zoned": false, 00:08:39.948 "supported_io_types": { 00:08:39.948 "read": true, 00:08:39.948 "write": true, 00:08:39.948 "unmap": false, 00:08:39.948 "flush": false, 00:08:39.948 "reset": true, 00:08:39.948 "nvme_admin": false, 00:08:39.948 "nvme_io": false, 00:08:39.948 "nvme_io_md": false, 00:08:39.948 "write_zeroes": true, 00:08:39.948 "zcopy": false, 00:08:39.948 "get_zone_info": false, 00:08:39.948 "zone_management": false, 00:08:39.948 "zone_append": false, 00:08:39.948 "compare": false, 00:08:39.948 "compare_and_write": false, 00:08:39.948 "abort": false, 00:08:39.948 "seek_hole": false, 00:08:39.948 "seek_data": false, 00:08:39.948 "copy": false, 00:08:39.948 "nvme_iov_md": false 00:08:39.948 }, 00:08:39.948 "memory_domains": [ 00:08:39.948 { 00:08:39.948 "dma_device_id": 
"system", 00:08:39.948 "dma_device_type": 1 00:08:39.948 }, 00:08:39.948 { 00:08:39.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.948 "dma_device_type": 2 00:08:39.948 }, 00:08:39.948 { 00:08:39.948 "dma_device_id": "system", 00:08:39.948 "dma_device_type": 1 00:08:39.948 }, 00:08:39.948 { 00:08:39.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.948 "dma_device_type": 2 00:08:39.948 } 00:08:39.948 ], 00:08:39.948 "driver_specific": { 00:08:39.948 "raid": { 00:08:39.948 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:39.948 "strip_size_kb": 0, 00:08:39.948 "state": "online", 00:08:39.948 "raid_level": "raid1", 00:08:39.948 "superblock": true, 00:08:39.948 "num_base_bdevs": 2, 00:08:39.948 "num_base_bdevs_discovered": 2, 00:08:39.948 "num_base_bdevs_operational": 2, 00:08:39.948 "base_bdevs_list": [ 00:08:39.948 { 00:08:39.948 "name": "pt1", 00:08:39.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.948 "is_configured": true, 00:08:39.948 "data_offset": 2048, 00:08:39.948 "data_size": 63488 00:08:39.948 }, 00:08:39.948 { 00:08:39.948 "name": "pt2", 00:08:39.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.948 "is_configured": true, 00:08:39.948 "data_offset": 2048, 00:08:39.948 "data_size": 63488 00:08:39.948 } 00:08:39.948 ] 00:08:39.948 } 00:08:39.948 } 00:08:39.948 }' 00:08:39.948 18:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.948 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:39.948 pt2' 00:08:39.948 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.948 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.948 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:39.948 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:39.948 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.948 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.948 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.948 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.207 [2024-12-06 18:05:52.181630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b '!=' 92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b ']' 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.207 [2024-12-06 18:05:52.229329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.207 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.207 "name": "raid_bdev1", 00:08:40.207 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:40.207 "strip_size_kb": 0, 00:08:40.207 "state": "online", 00:08:40.207 "raid_level": "raid1", 00:08:40.207 "superblock": true, 00:08:40.207 "num_base_bdevs": 2, 00:08:40.207 "num_base_bdevs_discovered": 1, 00:08:40.207 "num_base_bdevs_operational": 1, 00:08:40.207 "base_bdevs_list": [ 00:08:40.207 { 00:08:40.207 "name": null, 00:08:40.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.207 "is_configured": false, 00:08:40.207 "data_offset": 0, 00:08:40.207 "data_size": 63488 00:08:40.207 }, 00:08:40.207 { 00:08:40.207 "name": "pt2", 00:08:40.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.207 "is_configured": true, 00:08:40.207 "data_offset": 2048, 00:08:40.207 "data_size": 63488 00:08:40.208 } 00:08:40.208 ] 00:08:40.208 }' 
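The `verify_raid_bdev_state raid_bdev1 online raid1 0 1` call traced above pulls the matching entry out of `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'` and then compares its fields against the expected arguments. A sketch of that lookup-and-check, using a hypothetical trimmed copy of the RPC output (the exact comparisons live in parts of `bdev_raid.sh` not shown in this trace):

```python
import json

# Hypothetical trimmed output of: rpc_cmd bdev_raid_get_bdevs all
all_raid_bdevs = json.loads("""
[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
   "strip_size_kb": 0,
   "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 1}
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
tmp = next(b for b in all_raid_bdevs if b["name"] == "raid_bdev1")

# The expectations passed to verify_raid_bdev_state in the trace
# ("raid_bdev1 online raid1 0 1"), checked against the selected entry.
assert tmp["state"] == "online"
assert tmp["raid_level"] == "raid1"
assert tmp["num_base_bdevs_operational"] == 1
```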
00:08:40.208 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.208 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.776 [2024-12-06 18:05:52.724445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:40.776 [2024-12-06 18:05:52.724548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.776 [2024-12-06 18:05:52.724648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.776 [2024-12-06 18:05:52.724705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.776 [2024-12-06 18:05:52.724719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.776 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.776 [2024-12-06 18:05:52.804293] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:40.776 [2024-12-06 18:05:52.804363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.776 [2024-12-06 18:05:52.804383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:40.776 [2024-12-06 18:05:52.804396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.776 
[2024-12-06 18:05:52.806959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.776 [2024-12-06 18:05:52.807054] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:40.776 [2024-12-06 18:05:52.807169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:40.776 [2024-12-06 18:05:52.807228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:40.776 [2024-12-06 18:05:52.807357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:40.776 [2024-12-06 18:05:52.807372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:40.776 [2024-12-06 18:05:52.807651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:40.776 [2024-12-06 18:05:52.807824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:40.776 [2024-12-06 18:05:52.807836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:40.777 [2024-12-06 18:05:52.808009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.777 pt2 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.777 "name": "raid_bdev1", 00:08:40.777 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:40.777 "strip_size_kb": 0, 00:08:40.777 "state": "online", 00:08:40.777 "raid_level": "raid1", 00:08:40.777 "superblock": true, 00:08:40.777 "num_base_bdevs": 2, 00:08:40.777 "num_base_bdevs_discovered": 1, 00:08:40.777 "num_base_bdevs_operational": 1, 00:08:40.777 "base_bdevs_list": [ 00:08:40.777 { 00:08:40.777 "name": null, 00:08:40.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.777 "is_configured": false, 00:08:40.777 "data_offset": 2048, 00:08:40.777 "data_size": 63488 00:08:40.777 }, 00:08:40.777 { 00:08:40.777 "name": "pt2", 00:08:40.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.777 "is_configured": true, 00:08:40.777 "data_offset": 2048, 00:08:40.777 "data_size": 63488 00:08:40.777 } 00:08:40.777 ] 00:08:40.777 }' 
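In the `base_bdevs_list` JSON above, slot 0 is the removed base bdev (name `null`, all-zero UUID, `is_configured: false`) while slot 1 is the surviving `pt2`; this is what the later `jq -r '.[].base_bdevs_list[0].is_configured'` check followed by `[[ false == \f\a\l\s\e ]]` verifies. A sketch of that check against a hypothetical trimmed copy of the list:

```python
import json

# Hypothetical trimmed copy of the base_bdevs_list shown in the trace:
# slot 0 was removed (name null, is_configured false), slot 1 is pt2.
base_bdevs_list = json.loads("""
[
  {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
   "is_configured": false, "data_offset": 2048, "data_size": 63488},
  {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
   "is_configured": true, "data_offset": 2048, "data_size": 63488}
]
""")

# Equivalent of: jq -r '.[].base_bdevs_list[0].is_configured'
# followed by the shell check [[ false == \f\a\l\s\e ]]
assert base_bdevs_list[0]["is_configured"] is False

# Only one slot remains configured, matching
# num_base_bdevs_discovered/operational == 1 in the raid_bdev_info above.
configured = sum(1 for b in base_bdevs_list if b["is_configured"])
assert configured == 1
```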
00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.777 18:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.347 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.347 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.347 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.347 [2024-12-06 18:05:53.299498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.348 [2024-12-06 18:05:53.299591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.348 [2024-12-06 18:05:53.299713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.348 [2024-12-06 18:05:53.299814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.348 [2024-12-06 18:05:53.299868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.348 [2024-12-06 18:05:53.363441] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.348 [2024-12-06 18:05:53.363523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.348 [2024-12-06 18:05:53.363549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:41.348 [2024-12-06 18:05:53.363560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.348 [2024-12-06 18:05:53.366141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.348 [2024-12-06 18:05:53.366184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.348 [2024-12-06 18:05:53.366295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:41.348 [2024-12-06 18:05:53.366347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.348 [2024-12-06 18:05:53.366504] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:41.348 [2024-12-06 18:05:53.366518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.348 [2024-12-06 18:05:53.366537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:41.348 [2024-12-06 18:05:53.366597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:41.348 [2024-12-06 18:05:53.366681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:41.348 [2024-12-06 18:05:53.366691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:41.348 [2024-12-06 18:05:53.366987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:41.348 [2024-12-06 18:05:53.367190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:41.348 [2024-12-06 18:05:53.367209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:41.348 [2024-12-06 18:05:53.367453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.348 pt1 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.348 "name": "raid_bdev1", 00:08:41.348 "uuid": "92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b", 00:08:41.348 "strip_size_kb": 0, 00:08:41.348 "state": "online", 00:08:41.348 "raid_level": "raid1", 00:08:41.348 "superblock": true, 00:08:41.348 "num_base_bdevs": 2, 00:08:41.348 "num_base_bdevs_discovered": 1, 00:08:41.348 "num_base_bdevs_operational": 1, 00:08:41.348 "base_bdevs_list": [ 00:08:41.348 { 00:08:41.348 "name": null, 00:08:41.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.348 "is_configured": false, 00:08:41.348 "data_offset": 2048, 00:08:41.348 "data_size": 63488 00:08:41.348 }, 00:08:41.348 { 00:08:41.348 "name": "pt2", 00:08:41.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.348 "is_configured": true, 00:08:41.348 "data_offset": 2048, 00:08:41.348 "data_size": 63488 00:08:41.348 } 00:08:41.348 ] 00:08:41.348 }' 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.348 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:41.919 [2024-12-06 18:05:53.851050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b '!=' 92c8f7d7-4bbc-4fd2-ac2e-a5c414e41b6b ']' 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63605 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63605 ']' 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63605 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63605 00:08:41.919 killing process with pid 
63605 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63605' 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63605 00:08:41.919 [2024-12-06 18:05:53.928905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.919 18:05:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63605 00:08:41.919 [2024-12-06 18:05:53.929021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.919 [2024-12-06 18:05:53.929083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.919 [2024-12-06 18:05:53.929113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:42.178 [2024-12-06 18:05:54.167705] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.553 ************************************ 00:08:43.553 END TEST raid_superblock_test 00:08:43.553 ************************************ 00:08:43.553 18:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:43.553 00:08:43.553 real 0m6.608s 00:08:43.553 user 0m9.970s 00:08:43.553 sys 0m1.086s 00:08:43.553 18:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.553 18:05:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.553 18:05:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:43.553 18:05:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:43.553 18:05:55 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.553 18:05:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.553 ************************************ 00:08:43.553 START TEST raid_read_error_test 00:08:43.553 ************************************ 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:43.553 18:05:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hqTZMNXD3i 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63942 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63942 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63942 ']' 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
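The `raid_read_error_test` prologue traced above (`bdev_raid.sh@793`–`@795`) builds its base bdev list by looping `i` from 1 to `num_base_bdevs` and echoing `BaseBdev$i`, yielding `base_bdevs=('BaseBdev1' 'BaseBdev2')` for the two-bdev case. A one-line Python sketch of that naming loop:

```python
# Sketch of the base bdev name generation seen at bdev_raid.sh@793-795:
# one "BaseBdevN" name per base bdev, N from 1 to num_base_bdevs.
num_base_bdevs = 2
base_bdevs = [f"BaseBdev{i}" for i in range(1, num_base_bdevs + 1)]
print(base_bdevs)  # → ['BaseBdev1', 'BaseBdev2']
```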
00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.553 18:05:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:43.553 [2024-12-06 18:05:55.658623] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:08:43.553 [2024-12-06 18:05:55.658885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63942 ] 00:08:43.812 [2024-12-06 18:05:55.842926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.071 [2024-12-06 18:05:55.980163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.071 [2024-12-06 18:05:56.223720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.071 [2024-12-06 18:05:56.223777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 BaseBdev1_malloc 00:08:44.638 18:05:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 true 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.638 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.638 [2024-12-06 18:05:56.606903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:44.638 [2024-12-06 18:05:56.606978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.638 [2024-12-06 18:05:56.607006] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:44.638 [2024-12-06 18:05:56.607020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.639 [2024-12-06 18:05:56.609610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.639 [2024-12-06 18:05:56.609662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:44.639 BaseBdev1 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.639 BaseBdev2_malloc 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.639 true 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.639 [2024-12-06 18:05:56.669120] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:44.639 [2024-12-06 18:05:56.669187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.639 [2024-12-06 18:05:56.669209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:44.639 [2024-12-06 18:05:56.669223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.639 [2024-12-06 18:05:56.671739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.639 [2024-12-06 18:05:56.671784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:44.639 BaseBdev2 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.639 [2024-12-06 18:05:56.677177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.639 [2024-12-06 18:05:56.679441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.639 [2024-12-06 18:05:56.679721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:44.639 [2024-12-06 18:05:56.679743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.639 [2024-12-06 18:05:56.680058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:44.639 [2024-12-06 18:05:56.680301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:44.639 [2024-12-06 18:05:56.680360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:44.639 [2024-12-06 18:05:56.680593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.639 "name": "raid_bdev1", 00:08:44.639 "uuid": "4008f557-8bd0-42a1-8c5b-f52448da1fd0", 00:08:44.639 "strip_size_kb": 0, 00:08:44.639 "state": "online", 00:08:44.639 "raid_level": "raid1", 00:08:44.639 "superblock": true, 00:08:44.639 "num_base_bdevs": 2, 00:08:44.639 "num_base_bdevs_discovered": 2, 00:08:44.639 "num_base_bdevs_operational": 2, 00:08:44.639 "base_bdevs_list": [ 00:08:44.639 { 00:08:44.639 "name": "BaseBdev1", 00:08:44.639 "uuid": "31aa2219-17ac-5470-9786-130f65fd565c", 00:08:44.639 "is_configured": true, 00:08:44.639 "data_offset": 2048, 00:08:44.639 "data_size": 63488 00:08:44.639 }, 00:08:44.639 { 00:08:44.639 "name": "BaseBdev2", 00:08:44.639 "uuid": 
"f17a9c46-6479-5c0e-ab5b-12749e7d3edc", 00:08:44.639 "is_configured": true, 00:08:44.639 "data_offset": 2048, 00:08:44.639 "data_size": 63488 00:08:44.639 } 00:08:44.639 ] 00:08:44.639 }' 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.639 18:05:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.205 18:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:45.205 18:05:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:45.205 [2024-12-06 18:05:57.241785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.162 "name": "raid_bdev1", 00:08:46.162 "uuid": "4008f557-8bd0-42a1-8c5b-f52448da1fd0", 00:08:46.162 "strip_size_kb": 0, 00:08:46.162 "state": "online", 00:08:46.162 "raid_level": "raid1", 00:08:46.162 "superblock": true, 00:08:46.162 "num_base_bdevs": 2, 00:08:46.162 "num_base_bdevs_discovered": 2, 00:08:46.162 "num_base_bdevs_operational": 2, 00:08:46.162 "base_bdevs_list": [ 00:08:46.162 { 00:08:46.162 "name": "BaseBdev1", 00:08:46.162 "uuid": "31aa2219-17ac-5470-9786-130f65fd565c", 00:08:46.162 "is_configured": true, 00:08:46.162 "data_offset": 2048, 00:08:46.162 
"data_size": 63488 00:08:46.162 }, 00:08:46.162 { 00:08:46.162 "name": "BaseBdev2", 00:08:46.162 "uuid": "f17a9c46-6479-5c0e-ab5b-12749e7d3edc", 00:08:46.162 "is_configured": true, 00:08:46.162 "data_offset": 2048, 00:08:46.162 "data_size": 63488 00:08:46.162 } 00:08:46.162 ] 00:08:46.162 }' 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.162 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.728 [2024-12-06 18:05:58.627193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.728 [2024-12-06 18:05:58.627311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.728 [2024-12-06 18:05:58.630585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.728 [2024-12-06 18:05:58.630685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.728 [2024-12-06 18:05:58.630816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.728 [2024-12-06 18:05:58.630875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.728 { 00:08:46.728 "results": [ 00:08:46.728 { 00:08:46.728 "job": "raid_bdev1", 00:08:46.728 "core_mask": "0x1", 00:08:46.728 "workload": "randrw", 00:08:46.728 "percentage": 50, 00:08:46.728 "status": "finished", 00:08:46.728 "queue_depth": 1, 00:08:46.728 "io_size": 131072, 00:08:46.728 
"runtime": 1.386195, 00:08:46.728 "iops": 15097.44300044366, 00:08:46.728 "mibps": 1887.1803750554575, 00:08:46.728 "io_failed": 0, 00:08:46.728 "io_timeout": 0, 00:08:46.728 "avg_latency_us": 62.96844944780524, 00:08:46.728 "min_latency_us": 26.1589519650655, 00:08:46.728 "max_latency_us": 1767.1825327510917 00:08:46.728 } 00:08:46.728 ], 00:08:46.728 "core_count": 1 00:08:46.728 } 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63942 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63942 ']' 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63942 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63942 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.728 killing process with pid 63942 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63942' 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63942 00:08:46.728 [2024-12-06 18:05:58.670703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.728 18:05:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63942 00:08:46.728 [2024-12-06 18:05:58.831667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.103 18:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:48.103 18:06:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hqTZMNXD3i 00:08:48.103 18:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:48.103 18:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:48.103 18:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:48.103 18:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.103 18:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:48.103 18:06:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:48.103 00:08:48.103 real 0m4.610s 00:08:48.103 user 0m5.537s 00:08:48.103 sys 0m0.590s 00:08:48.103 18:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.103 18:06:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.103 ************************************ 00:08:48.103 END TEST raid_read_error_test 00:08:48.103 ************************************ 00:08:48.103 18:06:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:48.103 18:06:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:48.103 18:06:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.103 18:06:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.103 ************************************ 00:08:48.103 START TEST raid_write_error_test 00:08:48.103 ************************************ 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 
00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:48.103 18:06:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dsJQVd9yCV 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64087 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64087 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64087 ']' 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.103 18:06:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.362 [2024-12-06 18:06:00.346201] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:08:48.362 [2024-12-06 18:06:00.346320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64087 ] 00:08:48.362 [2024-12-06 18:06:00.526581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.622 [2024-12-06 18:06:00.645467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.882 [2024-12-06 18:06:00.855618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.882 [2024-12-06 18:06:00.855688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.142 BaseBdev1_malloc 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.142 true 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.142 [2024-12-06 18:06:01.297235] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.142 [2024-12-06 18:06:01.297316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.142 [2024-12-06 18:06:01.297342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:49.142 [2024-12-06 18:06:01.297356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.142 [2024-12-06 18:06:01.299771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.142 [2024-12-06 18:06:01.299881] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.142 BaseBdev1 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.142 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 BaseBdev2_malloc 00:08:49.402 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.402 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:49.402 18:06:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.402 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 true 00:08:49.402 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.402 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:49.402 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.402 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.402 [2024-12-06 18:06:01.369326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:49.402 [2024-12-06 18:06:01.369434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.402 [2024-12-06 18:06:01.369470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:49.402 [2024-12-06 18:06:01.369486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.402 [2024-12-06 18:06:01.372479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.402 [2024-12-06 18:06:01.372535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:49.402 BaseBdev2 00:08:49.402 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.433 [2024-12-06 18:06:01.381451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:49.433 [2024-12-06 18:06:01.383615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.433 [2024-12-06 18:06:01.384005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.433 [2024-12-06 18:06:01.384032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:49.433 [2024-12-06 18:06:01.384419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:49.433 [2024-12-06 18:06:01.384646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:49.433 [2024-12-06 18:06:01.384659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:49.433 [2024-12-06 18:06:01.384865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.433 "name": "raid_bdev1", 00:08:49.433 "uuid": "07972f26-1de1-477a-ba61-987989bfd94c", 00:08:49.433 "strip_size_kb": 0, 00:08:49.433 "state": "online", 00:08:49.433 "raid_level": "raid1", 00:08:49.433 "superblock": true, 00:08:49.433 "num_base_bdevs": 2, 00:08:49.433 "num_base_bdevs_discovered": 2, 00:08:49.433 "num_base_bdevs_operational": 2, 00:08:49.433 "base_bdevs_list": [ 00:08:49.433 { 00:08:49.433 "name": "BaseBdev1", 00:08:49.433 "uuid": "dcd83d29-4e22-50ef-8d63-79312d8cc2cb", 00:08:49.433 "is_configured": true, 00:08:49.433 "data_offset": 2048, 00:08:49.433 "data_size": 63488 00:08:49.433 }, 00:08:49.433 { 00:08:49.433 "name": "BaseBdev2", 00:08:49.433 "uuid": "c1f423d1-2ee9-5bcc-bdbd-9873ed3d872a", 00:08:49.433 "is_configured": true, 00:08:49.433 "data_offset": 2048, 00:08:49.433 "data_size": 63488 00:08:49.433 } 00:08:49.433 ] 00:08:49.433 }' 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.433 18:06:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.002 18:06:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:50.002 18:06:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:50.002 [2024-12-06 18:06:01.965916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.940 [2024-12-06 18:06:02.871124] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:50.940 [2024-12-06 18:06:02.871276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.940 [2024-12-06 18:06:02.871500] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.940 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.940 "name": "raid_bdev1", 00:08:50.940 "uuid": "07972f26-1de1-477a-ba61-987989bfd94c", 00:08:50.940 "strip_size_kb": 0, 00:08:50.941 "state": "online", 00:08:50.941 "raid_level": "raid1", 00:08:50.941 "superblock": true, 00:08:50.941 "num_base_bdevs": 2, 00:08:50.941 "num_base_bdevs_discovered": 1, 00:08:50.941 "num_base_bdevs_operational": 1, 00:08:50.941 "base_bdevs_list": [ 00:08:50.941 { 00:08:50.941 "name": null, 00:08:50.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.941 "is_configured": false, 00:08:50.941 "data_offset": 0, 00:08:50.941 "data_size": 63488 00:08:50.941 }, 00:08:50.941 { 00:08:50.941 "name": 
"BaseBdev2", 00:08:50.941 "uuid": "c1f423d1-2ee9-5bcc-bdbd-9873ed3d872a", 00:08:50.941 "is_configured": true, 00:08:50.941 "data_offset": 2048, 00:08:50.941 "data_size": 63488 00:08:50.941 } 00:08:50.941 ] 00:08:50.941 }' 00:08:50.941 18:06:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.941 18:06:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.510 [2024-12-06 18:06:03.377620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.510 [2024-12-06 18:06:03.377745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.510 [2024-12-06 18:06:03.381172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.510 [2024-12-06 18:06:03.381262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.510 [2024-12-06 18:06:03.381373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.510 [2024-12-06 18:06:03.381431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:51.510 { 00:08:51.510 "results": [ 00:08:51.510 { 00:08:51.510 "job": "raid_bdev1", 00:08:51.510 "core_mask": "0x1", 00:08:51.510 "workload": "randrw", 00:08:51.510 "percentage": 50, 00:08:51.510 "status": "finished", 00:08:51.510 "queue_depth": 1, 00:08:51.510 "io_size": 131072, 00:08:51.510 "runtime": 1.412632, 00:08:51.510 "iops": 17941.686157470594, 00:08:51.510 "mibps": 2242.7107696838243, 00:08:51.510 "io_failed": 0, 00:08:51.510 "io_timeout": 0, 
00:08:51.510 "avg_latency_us": 52.56171019838887, 00:08:51.510 "min_latency_us": 24.929257641921396, 00:08:51.510 "max_latency_us": 1667.0183406113538 00:08:51.510 } 00:08:51.510 ], 00:08:51.510 "core_count": 1 00:08:51.510 } 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64087 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64087 ']' 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64087 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64087 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64087' 00:08:51.510 killing process with pid 64087 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64087 00:08:51.510 [2024-12-06 18:06:03.413255] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.510 18:06:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64087 00:08:51.510 [2024-12-06 18:06:03.563621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dsJQVd9yCV 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:52.891 ************************************ 00:08:52.891 END TEST raid_write_error_test 00:08:52.891 ************************************ 00:08:52.891 00:08:52.891 real 0m4.654s 00:08:52.891 user 0m5.622s 00:08:52.891 sys 0m0.576s 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.891 18:06:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.891 18:06:04 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:52.891 18:06:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:52.891 18:06:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:52.891 18:06:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:52.891 18:06:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.891 18:06:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.891 ************************************ 00:08:52.891 START TEST raid_state_function_test 00:08:52.891 ************************************ 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:52.891 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:52.892 
18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64231 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64231' 00:08:52.892 Process raid pid: 64231 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64231 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64231 ']' 00:08:52.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.892 18:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.158 [2024-12-06 18:06:05.059227] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:08:53.158 [2024-12-06 18:06:05.059401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.158 [2024-12-06 18:06:05.240936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.418 [2024-12-06 18:06:05.370357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.681 [2024-12-06 18:06:05.599350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.681 [2024-12-06 18:06:05.599512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.944 [2024-12-06 18:06:05.955437] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.944 [2024-12-06 18:06:05.955501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.944 [2024-12-06 18:06:05.955514] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.944 [2024-12-06 18:06:05.955525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.944 [2024-12-06 18:06:05.955533] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.944 [2024-12-06 18:06:05.955542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.944 18:06:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.944 18:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.944 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.944 "name": "Existed_Raid", 00:08:53.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.944 "strip_size_kb": 64, 00:08:53.944 "state": "configuring", 00:08:53.944 "raid_level": "raid0", 00:08:53.944 "superblock": false, 00:08:53.944 "num_base_bdevs": 3, 00:08:53.944 "num_base_bdevs_discovered": 0, 00:08:53.944 "num_base_bdevs_operational": 3, 00:08:53.944 "base_bdevs_list": [ 00:08:53.944 { 00:08:53.945 "name": "BaseBdev1", 00:08:53.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.945 "is_configured": false, 00:08:53.945 "data_offset": 0, 00:08:53.945 "data_size": 0 00:08:53.945 }, 00:08:53.945 { 00:08:53.945 "name": "BaseBdev2", 00:08:53.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.945 "is_configured": false, 00:08:53.945 "data_offset": 0, 00:08:53.945 "data_size": 0 00:08:53.945 }, 00:08:53.945 { 00:08:53.945 "name": "BaseBdev3", 00:08:53.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.945 "is_configured": false, 00:08:53.945 "data_offset": 0, 00:08:53.945 "data_size": 0 00:08:53.945 } 00:08:53.945 ] 00:08:53.945 }' 00:08:53.945 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.945 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.514 18:06:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 [2024-12-06 18:06:06.422585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.514 [2024-12-06 18:06:06.422693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 [2024-12-06 18:06:06.430558] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.514 [2024-12-06 18:06:06.430657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.514 [2024-12-06 18:06:06.430693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.514 [2024-12-06 18:06:06.430720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.514 [2024-12-06 18:06:06.430751] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.514 [2024-12-06 18:06:06.430776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 [2024-12-06 18:06:06.481373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.514 BaseBdev1 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.514 [ 00:08:54.514 { 00:08:54.514 "name": "BaseBdev1", 00:08:54.514 "aliases": [ 00:08:54.514 "d17a1fda-61cc-47fa-9d49-3d49706dc657" 00:08:54.514 ], 00:08:54.514 
"product_name": "Malloc disk", 00:08:54.514 "block_size": 512, 00:08:54.514 "num_blocks": 65536, 00:08:54.514 "uuid": "d17a1fda-61cc-47fa-9d49-3d49706dc657", 00:08:54.514 "assigned_rate_limits": { 00:08:54.514 "rw_ios_per_sec": 0, 00:08:54.514 "rw_mbytes_per_sec": 0, 00:08:54.514 "r_mbytes_per_sec": 0, 00:08:54.514 "w_mbytes_per_sec": 0 00:08:54.514 }, 00:08:54.514 "claimed": true, 00:08:54.514 "claim_type": "exclusive_write", 00:08:54.514 "zoned": false, 00:08:54.514 "supported_io_types": { 00:08:54.514 "read": true, 00:08:54.514 "write": true, 00:08:54.514 "unmap": true, 00:08:54.514 "flush": true, 00:08:54.514 "reset": true, 00:08:54.514 "nvme_admin": false, 00:08:54.514 "nvme_io": false, 00:08:54.514 "nvme_io_md": false, 00:08:54.514 "write_zeroes": true, 00:08:54.514 "zcopy": true, 00:08:54.514 "get_zone_info": false, 00:08:54.514 "zone_management": false, 00:08:54.514 "zone_append": false, 00:08:54.514 "compare": false, 00:08:54.514 "compare_and_write": false, 00:08:54.514 "abort": true, 00:08:54.514 "seek_hole": false, 00:08:54.514 "seek_data": false, 00:08:54.514 "copy": true, 00:08:54.514 "nvme_iov_md": false 00:08:54.514 }, 00:08:54.514 "memory_domains": [ 00:08:54.514 { 00:08:54.514 "dma_device_id": "system", 00:08:54.514 "dma_device_type": 1 00:08:54.514 }, 00:08:54.514 { 00:08:54.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.514 "dma_device_type": 2 00:08:54.514 } 00:08:54.514 ], 00:08:54.514 "driver_specific": {} 00:08:54.514 } 00:08:54.514 ] 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.514 18:06:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.514 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.515 "name": "Existed_Raid", 00:08:54.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.515 "strip_size_kb": 64, 00:08:54.515 "state": "configuring", 00:08:54.515 "raid_level": "raid0", 00:08:54.515 "superblock": false, 00:08:54.515 "num_base_bdevs": 3, 00:08:54.515 "num_base_bdevs_discovered": 1, 00:08:54.515 "num_base_bdevs_operational": 3, 00:08:54.515 "base_bdevs_list": [ 00:08:54.515 { 00:08:54.515 "name": "BaseBdev1", 
00:08:54.515 "uuid": "d17a1fda-61cc-47fa-9d49-3d49706dc657", 00:08:54.515 "is_configured": true, 00:08:54.515 "data_offset": 0, 00:08:54.515 "data_size": 65536 00:08:54.515 }, 00:08:54.515 { 00:08:54.515 "name": "BaseBdev2", 00:08:54.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.515 "is_configured": false, 00:08:54.515 "data_offset": 0, 00:08:54.515 "data_size": 0 00:08:54.515 }, 00:08:54.515 { 00:08:54.515 "name": "BaseBdev3", 00:08:54.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.515 "is_configured": false, 00:08:54.515 "data_offset": 0, 00:08:54.515 "data_size": 0 00:08:54.515 } 00:08:54.515 ] 00:08:54.515 }' 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.515 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.086 [2024-12-06 18:06:06.964629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.086 [2024-12-06 18:06:06.964689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.086 [2024-12-06 
18:06:06.976724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.086 [2024-12-06 18:06:06.978751] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.086 [2024-12-06 18:06:06.978866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.086 [2024-12-06 18:06:06.978884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.086 [2024-12-06 18:06:06.978895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.086 18:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.086 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.086 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.086 "name": "Existed_Raid", 00:08:55.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.086 "strip_size_kb": 64, 00:08:55.086 "state": "configuring", 00:08:55.086 "raid_level": "raid0", 00:08:55.086 "superblock": false, 00:08:55.086 "num_base_bdevs": 3, 00:08:55.086 "num_base_bdevs_discovered": 1, 00:08:55.086 "num_base_bdevs_operational": 3, 00:08:55.086 "base_bdevs_list": [ 00:08:55.086 { 00:08:55.086 "name": "BaseBdev1", 00:08:55.086 "uuid": "d17a1fda-61cc-47fa-9d49-3d49706dc657", 00:08:55.086 "is_configured": true, 00:08:55.086 "data_offset": 0, 00:08:55.086 "data_size": 65536 00:08:55.086 }, 00:08:55.086 { 00:08:55.086 "name": "BaseBdev2", 00:08:55.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.086 "is_configured": false, 00:08:55.086 "data_offset": 0, 00:08:55.086 "data_size": 0 00:08:55.086 }, 00:08:55.086 { 00:08:55.086 "name": "BaseBdev3", 00:08:55.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.086 "is_configured": false, 00:08:55.086 "data_offset": 0, 00:08:55.086 "data_size": 0 00:08:55.086 } 00:08:55.086 ] 00:08:55.086 }' 00:08:55.086 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:55.086 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.346 [2024-12-06 18:06:07.468234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.346 BaseBdev2 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.346 18:06:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.346 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.346 [ 00:08:55.346 { 00:08:55.346 "name": "BaseBdev2", 00:08:55.346 "aliases": [ 00:08:55.346 "5f5c81b7-763e-4486-8c75-7219a9d4bde4" 00:08:55.346 ], 00:08:55.346 "product_name": "Malloc disk", 00:08:55.346 "block_size": 512, 00:08:55.346 "num_blocks": 65536, 00:08:55.346 "uuid": "5f5c81b7-763e-4486-8c75-7219a9d4bde4", 00:08:55.346 "assigned_rate_limits": { 00:08:55.346 "rw_ios_per_sec": 0, 00:08:55.346 "rw_mbytes_per_sec": 0, 00:08:55.346 "r_mbytes_per_sec": 0, 00:08:55.347 "w_mbytes_per_sec": 0 00:08:55.347 }, 00:08:55.347 "claimed": true, 00:08:55.347 "claim_type": "exclusive_write", 00:08:55.347 "zoned": false, 00:08:55.347 "supported_io_types": { 00:08:55.347 "read": true, 00:08:55.347 "write": true, 00:08:55.347 "unmap": true, 00:08:55.347 "flush": true, 00:08:55.347 "reset": true, 00:08:55.347 "nvme_admin": false, 00:08:55.347 "nvme_io": false, 00:08:55.347 "nvme_io_md": false, 00:08:55.347 "write_zeroes": true, 00:08:55.347 "zcopy": true, 00:08:55.347 "get_zone_info": false, 00:08:55.347 "zone_management": false, 00:08:55.347 "zone_append": false, 00:08:55.347 "compare": false, 00:08:55.347 "compare_and_write": false, 00:08:55.347 "abort": true, 00:08:55.347 "seek_hole": false, 00:08:55.347 "seek_data": false, 00:08:55.347 "copy": true, 00:08:55.347 "nvme_iov_md": false 00:08:55.347 }, 00:08:55.347 "memory_domains": [ 00:08:55.347 { 00:08:55.347 "dma_device_id": "system", 00:08:55.347 "dma_device_type": 1 00:08:55.347 }, 00:08:55.347 { 00:08:55.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.347 "dma_device_type": 2 00:08:55.347 } 00:08:55.347 ], 00:08:55.347 "driver_specific": {} 00:08:55.347 } 00:08:55.347 ] 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.347 18:06:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.347 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.606 "name": "Existed_Raid", 00:08:55.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.606 "strip_size_kb": 64, 00:08:55.606 "state": "configuring", 00:08:55.606 "raid_level": "raid0", 00:08:55.606 "superblock": false, 00:08:55.606 "num_base_bdevs": 3, 00:08:55.606 "num_base_bdevs_discovered": 2, 00:08:55.606 "num_base_bdevs_operational": 3, 00:08:55.606 "base_bdevs_list": [ 00:08:55.606 { 00:08:55.606 "name": "BaseBdev1", 00:08:55.606 "uuid": "d17a1fda-61cc-47fa-9d49-3d49706dc657", 00:08:55.606 "is_configured": true, 00:08:55.606 "data_offset": 0, 00:08:55.606 "data_size": 65536 00:08:55.606 }, 00:08:55.606 { 00:08:55.606 "name": "BaseBdev2", 00:08:55.606 "uuid": "5f5c81b7-763e-4486-8c75-7219a9d4bde4", 00:08:55.606 "is_configured": true, 00:08:55.606 "data_offset": 0, 00:08:55.606 "data_size": 65536 00:08:55.606 }, 00:08:55.606 { 00:08:55.606 "name": "BaseBdev3", 00:08:55.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.606 "is_configured": false, 00:08:55.606 "data_offset": 0, 00:08:55.606 "data_size": 0 00:08:55.606 } 00:08:55.606 ] 00:08:55.606 }' 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.606 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.865 18:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:55.865 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.865 18:06:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.865 [2024-12-06 18:06:08.008837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.865 [2024-12-06 18:06:08.008887] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.865 [2024-12-06 18:06:08.008902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:55.865 [2024-12-06 18:06:08.009196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:55.865 [2024-12-06 18:06:08.009378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.865 [2024-12-06 18:06:08.009389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:55.865 [2024-12-06 18:06:08.009684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.865 BaseBdev3 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.865 
18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.865 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.125 [ 00:08:56.125 { 00:08:56.125 "name": "BaseBdev3", 00:08:56.125 "aliases": [ 00:08:56.125 "95df8c89-fcbc-475d-9620-4e77c2972c2c" 00:08:56.125 ], 00:08:56.125 "product_name": "Malloc disk", 00:08:56.125 "block_size": 512, 00:08:56.125 "num_blocks": 65536, 00:08:56.125 "uuid": "95df8c89-fcbc-475d-9620-4e77c2972c2c", 00:08:56.125 "assigned_rate_limits": { 00:08:56.125 "rw_ios_per_sec": 0, 00:08:56.125 "rw_mbytes_per_sec": 0, 00:08:56.125 "r_mbytes_per_sec": 0, 00:08:56.125 "w_mbytes_per_sec": 0 00:08:56.125 }, 00:08:56.125 "claimed": true, 00:08:56.125 "claim_type": "exclusive_write", 00:08:56.125 "zoned": false, 00:08:56.125 "supported_io_types": { 00:08:56.125 "read": true, 00:08:56.125 "write": true, 00:08:56.125 "unmap": true, 00:08:56.125 "flush": true, 00:08:56.125 "reset": true, 00:08:56.125 "nvme_admin": false, 00:08:56.125 "nvme_io": false, 00:08:56.125 "nvme_io_md": false, 00:08:56.125 "write_zeroes": true, 00:08:56.125 "zcopy": true, 00:08:56.125 "get_zone_info": false, 00:08:56.125 "zone_management": false, 00:08:56.125 "zone_append": false, 00:08:56.125 "compare": false, 00:08:56.125 "compare_and_write": false, 00:08:56.125 "abort": true, 00:08:56.125 "seek_hole": false, 00:08:56.125 "seek_data": false, 00:08:56.125 "copy": true, 00:08:56.125 "nvme_iov_md": false 00:08:56.125 }, 00:08:56.125 "memory_domains": [ 00:08:56.125 { 00:08:56.125 "dma_device_id": "system", 00:08:56.125 "dma_device_type": 1 00:08:56.125 }, 00:08:56.125 { 00:08:56.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.125 "dma_device_type": 2 00:08:56.125 } 00:08:56.125 ], 00:08:56.125 "driver_specific": {} 00:08:56.125 } 00:08:56.125 ] 
00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.125 "name": "Existed_Raid", 00:08:56.125 "uuid": "a740bf4c-1b7b-43f0-aade-3e0827384687", 00:08:56.125 "strip_size_kb": 64, 00:08:56.125 "state": "online", 00:08:56.125 "raid_level": "raid0", 00:08:56.125 "superblock": false, 00:08:56.125 "num_base_bdevs": 3, 00:08:56.125 "num_base_bdevs_discovered": 3, 00:08:56.125 "num_base_bdevs_operational": 3, 00:08:56.125 "base_bdevs_list": [ 00:08:56.125 { 00:08:56.125 "name": "BaseBdev1", 00:08:56.125 "uuid": "d17a1fda-61cc-47fa-9d49-3d49706dc657", 00:08:56.125 "is_configured": true, 00:08:56.125 "data_offset": 0, 00:08:56.125 "data_size": 65536 00:08:56.125 }, 00:08:56.125 { 00:08:56.125 "name": "BaseBdev2", 00:08:56.125 "uuid": "5f5c81b7-763e-4486-8c75-7219a9d4bde4", 00:08:56.125 "is_configured": true, 00:08:56.125 "data_offset": 0, 00:08:56.125 "data_size": 65536 00:08:56.125 }, 00:08:56.125 { 00:08:56.125 "name": "BaseBdev3", 00:08:56.125 "uuid": "95df8c89-fcbc-475d-9620-4e77c2972c2c", 00:08:56.125 "is_configured": true, 00:08:56.125 "data_offset": 0, 00:08:56.125 "data_size": 65536 00:08:56.125 } 00:08:56.125 ] 00:08:56.125 }' 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.125 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.384 [2024-12-06 18:06:08.464558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.384 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.384 "name": "Existed_Raid", 00:08:56.384 "aliases": [ 00:08:56.384 "a740bf4c-1b7b-43f0-aade-3e0827384687" 00:08:56.384 ], 00:08:56.384 "product_name": "Raid Volume", 00:08:56.384 "block_size": 512, 00:08:56.384 "num_blocks": 196608, 00:08:56.385 "uuid": "a740bf4c-1b7b-43f0-aade-3e0827384687", 00:08:56.385 "assigned_rate_limits": { 00:08:56.385 "rw_ios_per_sec": 0, 00:08:56.385 "rw_mbytes_per_sec": 0, 00:08:56.385 "r_mbytes_per_sec": 0, 00:08:56.385 "w_mbytes_per_sec": 0 00:08:56.385 }, 00:08:56.385 "claimed": false, 00:08:56.385 "zoned": false, 00:08:56.385 "supported_io_types": { 00:08:56.385 "read": true, 00:08:56.385 "write": true, 00:08:56.385 "unmap": true, 00:08:56.385 "flush": true, 00:08:56.385 "reset": true, 00:08:56.385 "nvme_admin": false, 00:08:56.385 "nvme_io": false, 00:08:56.385 "nvme_io_md": false, 00:08:56.385 "write_zeroes": true, 00:08:56.385 "zcopy": false, 00:08:56.385 "get_zone_info": false, 00:08:56.385 "zone_management": false, 00:08:56.385 
"zone_append": false, 00:08:56.385 "compare": false, 00:08:56.385 "compare_and_write": false, 00:08:56.385 "abort": false, 00:08:56.385 "seek_hole": false, 00:08:56.385 "seek_data": false, 00:08:56.385 "copy": false, 00:08:56.385 "nvme_iov_md": false 00:08:56.385 }, 00:08:56.385 "memory_domains": [ 00:08:56.385 { 00:08:56.385 "dma_device_id": "system", 00:08:56.385 "dma_device_type": 1 00:08:56.385 }, 00:08:56.385 { 00:08:56.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.385 "dma_device_type": 2 00:08:56.385 }, 00:08:56.385 { 00:08:56.385 "dma_device_id": "system", 00:08:56.385 "dma_device_type": 1 00:08:56.385 }, 00:08:56.385 { 00:08:56.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.385 "dma_device_type": 2 00:08:56.385 }, 00:08:56.385 { 00:08:56.385 "dma_device_id": "system", 00:08:56.385 "dma_device_type": 1 00:08:56.385 }, 00:08:56.385 { 00:08:56.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.385 "dma_device_type": 2 00:08:56.385 } 00:08:56.385 ], 00:08:56.385 "driver_specific": { 00:08:56.385 "raid": { 00:08:56.385 "uuid": "a740bf4c-1b7b-43f0-aade-3e0827384687", 00:08:56.385 "strip_size_kb": 64, 00:08:56.385 "state": "online", 00:08:56.385 "raid_level": "raid0", 00:08:56.385 "superblock": false, 00:08:56.385 "num_base_bdevs": 3, 00:08:56.385 "num_base_bdevs_discovered": 3, 00:08:56.385 "num_base_bdevs_operational": 3, 00:08:56.385 "base_bdevs_list": [ 00:08:56.385 { 00:08:56.385 "name": "BaseBdev1", 00:08:56.385 "uuid": "d17a1fda-61cc-47fa-9d49-3d49706dc657", 00:08:56.385 "is_configured": true, 00:08:56.385 "data_offset": 0, 00:08:56.385 "data_size": 65536 00:08:56.385 }, 00:08:56.385 { 00:08:56.385 "name": "BaseBdev2", 00:08:56.385 "uuid": "5f5c81b7-763e-4486-8c75-7219a9d4bde4", 00:08:56.385 "is_configured": true, 00:08:56.385 "data_offset": 0, 00:08:56.385 "data_size": 65536 00:08:56.385 }, 00:08:56.385 { 00:08:56.385 "name": "BaseBdev3", 00:08:56.385 "uuid": "95df8c89-fcbc-475d-9620-4e77c2972c2c", 00:08:56.385 "is_configured": true, 
00:08:56.385 "data_offset": 0, 00:08:56.385 "data_size": 65536 00:08:56.385 } 00:08:56.385 ] 00:08:56.385 } 00:08:56.385 } 00:08:56.385 }' 00:08:56.385 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.385 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:56.385 BaseBdev2 00:08:56.385 BaseBdev3' 00:08:56.385 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.385 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.385 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.385 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:56.385 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.385 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.385 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.643 18:06:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.643 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.644 [2024-12-06 18:06:08.683854] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.644 [2024-12-06 18:06:08.683944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.644 [2024-12-06 18:06:08.684021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.644 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.902 18:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.902 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.902 "name": "Existed_Raid", 00:08:56.902 "uuid": "a740bf4c-1b7b-43f0-aade-3e0827384687", 00:08:56.902 "strip_size_kb": 64, 00:08:56.902 "state": "offline", 00:08:56.902 "raid_level": "raid0", 00:08:56.902 "superblock": false, 00:08:56.902 "num_base_bdevs": 3, 00:08:56.902 "num_base_bdevs_discovered": 2, 00:08:56.902 "num_base_bdevs_operational": 2, 00:08:56.902 "base_bdevs_list": [ 00:08:56.902 { 00:08:56.902 "name": null, 00:08:56.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.902 "is_configured": false, 00:08:56.902 "data_offset": 0, 00:08:56.902 "data_size": 65536 00:08:56.902 }, 00:08:56.902 { 00:08:56.902 "name": "BaseBdev2", 00:08:56.902 "uuid": "5f5c81b7-763e-4486-8c75-7219a9d4bde4", 00:08:56.902 "is_configured": true, 00:08:56.902 "data_offset": 0, 00:08:56.902 "data_size": 65536 00:08:56.902 }, 00:08:56.902 { 00:08:56.902 "name": "BaseBdev3", 00:08:56.902 "uuid": "95df8c89-fcbc-475d-9620-4e77c2972c2c", 00:08:56.902 "is_configured": true, 00:08:56.902 "data_offset": 0, 00:08:56.902 "data_size": 65536 00:08:56.902 } 00:08:56.902 ] 00:08:56.902 }' 00:08:56.902 18:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.902 18:06:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.161 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.161 [2024-12-06 18:06:09.315606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.423 18:06:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.423 [2024-12-06 18:06:09.481881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:57.423 [2024-12-06 18:06:09.481938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.423 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.424 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 BaseBdev2 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 [ 00:08:57.683 { 00:08:57.683 "name": "BaseBdev2", 00:08:57.683 "aliases": [ 00:08:57.683 "817d3c5b-6f54-42f4-a138-c5912599099e" 00:08:57.683 ], 00:08:57.683 "product_name": "Malloc disk", 00:08:57.683 "block_size": 512, 00:08:57.683 "num_blocks": 65536, 00:08:57.683 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:08:57.683 "assigned_rate_limits": { 00:08:57.683 "rw_ios_per_sec": 0, 00:08:57.683 "rw_mbytes_per_sec": 0, 00:08:57.683 "r_mbytes_per_sec": 0, 00:08:57.683 "w_mbytes_per_sec": 0 00:08:57.683 }, 00:08:57.683 "claimed": false, 00:08:57.683 "zoned": false, 00:08:57.683 "supported_io_types": { 00:08:57.683 "read": true, 00:08:57.683 "write": true, 00:08:57.683 "unmap": true, 00:08:57.683 "flush": true, 00:08:57.683 "reset": true, 00:08:57.683 "nvme_admin": false, 00:08:57.683 "nvme_io": false, 00:08:57.683 "nvme_io_md": false, 00:08:57.683 "write_zeroes": true, 00:08:57.683 "zcopy": true, 00:08:57.683 "get_zone_info": false, 00:08:57.683 "zone_management": false, 00:08:57.683 "zone_append": false, 00:08:57.683 "compare": false, 00:08:57.683 "compare_and_write": false, 00:08:57.683 "abort": true, 00:08:57.683 "seek_hole": false, 00:08:57.683 "seek_data": false, 00:08:57.683 "copy": true, 00:08:57.683 "nvme_iov_md": false 00:08:57.683 }, 00:08:57.683 "memory_domains": [ 00:08:57.683 { 00:08:57.683 "dma_device_id": "system", 00:08:57.683 "dma_device_type": 1 00:08:57.683 }, 
00:08:57.683 { 00:08:57.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.683 "dma_device_type": 2 00:08:57.683 } 00:08:57.683 ], 00:08:57.683 "driver_specific": {} 00:08:57.683 } 00:08:57.683 ] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 BaseBdev3 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 [ 00:08:57.683 { 00:08:57.683 "name": "BaseBdev3", 00:08:57.683 "aliases": [ 00:08:57.683 "f74be83a-a096-4132-885f-2e4d30def1ff" 00:08:57.683 ], 00:08:57.683 "product_name": "Malloc disk", 00:08:57.683 "block_size": 512, 00:08:57.683 "num_blocks": 65536, 00:08:57.683 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:08:57.683 "assigned_rate_limits": { 00:08:57.683 "rw_ios_per_sec": 0, 00:08:57.683 "rw_mbytes_per_sec": 0, 00:08:57.683 "r_mbytes_per_sec": 0, 00:08:57.683 "w_mbytes_per_sec": 0 00:08:57.683 }, 00:08:57.683 "claimed": false, 00:08:57.683 "zoned": false, 00:08:57.683 "supported_io_types": { 00:08:57.683 "read": true, 00:08:57.683 "write": true, 00:08:57.683 "unmap": true, 00:08:57.683 "flush": true, 00:08:57.683 "reset": true, 00:08:57.683 "nvme_admin": false, 00:08:57.683 "nvme_io": false, 00:08:57.683 "nvme_io_md": false, 00:08:57.683 "write_zeroes": true, 00:08:57.683 "zcopy": true, 00:08:57.683 "get_zone_info": false, 00:08:57.683 "zone_management": false, 00:08:57.683 "zone_append": false, 00:08:57.683 "compare": false, 00:08:57.683 "compare_and_write": false, 00:08:57.683 "abort": true, 00:08:57.683 "seek_hole": false, 00:08:57.683 "seek_data": false, 00:08:57.683 "copy": true, 00:08:57.683 "nvme_iov_md": false 00:08:57.683 }, 00:08:57.683 "memory_domains": [ 00:08:57.683 { 00:08:57.683 "dma_device_id": "system", 00:08:57.683 "dma_device_type": 1 00:08:57.683 }, 00:08:57.683 { 
00:08:57.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.683 "dma_device_type": 2 00:08:57.683 } 00:08:57.683 ], 00:08:57.683 "driver_specific": {} 00:08:57.683 } 00:08:57.683 ] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 [2024-12-06 18:06:09.812727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.683 [2024-12-06 18:06:09.812838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.683 [2024-12-06 18:06:09.812894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.683 [2024-12-06 18:06:09.814972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.683 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.942 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.942 "name": "Existed_Raid", 00:08:57.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.942 "strip_size_kb": 64, 00:08:57.942 "state": "configuring", 00:08:57.942 "raid_level": "raid0", 00:08:57.942 "superblock": false, 00:08:57.942 "num_base_bdevs": 3, 00:08:57.942 "num_base_bdevs_discovered": 2, 00:08:57.942 "num_base_bdevs_operational": 3, 00:08:57.942 "base_bdevs_list": [ 00:08:57.942 { 00:08:57.942 "name": "BaseBdev1", 00:08:57.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.942 
"is_configured": false, 00:08:57.942 "data_offset": 0, 00:08:57.942 "data_size": 0 00:08:57.942 }, 00:08:57.942 { 00:08:57.942 "name": "BaseBdev2", 00:08:57.942 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:08:57.942 "is_configured": true, 00:08:57.942 "data_offset": 0, 00:08:57.942 "data_size": 65536 00:08:57.942 }, 00:08:57.942 { 00:08:57.942 "name": "BaseBdev3", 00:08:57.942 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:08:57.942 "is_configured": true, 00:08:57.942 "data_offset": 0, 00:08:57.942 "data_size": 65536 00:08:57.942 } 00:08:57.942 ] 00:08:57.942 }' 00:08:57.942 18:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.942 18:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.202 [2024-12-06 18:06:10.247996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.202 18:06:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.202 "name": "Existed_Raid", 00:08:58.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.202 "strip_size_kb": 64, 00:08:58.202 "state": "configuring", 00:08:58.202 "raid_level": "raid0", 00:08:58.202 "superblock": false, 00:08:58.202 "num_base_bdevs": 3, 00:08:58.202 "num_base_bdevs_discovered": 1, 00:08:58.202 "num_base_bdevs_operational": 3, 00:08:58.202 "base_bdevs_list": [ 00:08:58.202 { 00:08:58.202 "name": "BaseBdev1", 00:08:58.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.202 "is_configured": false, 00:08:58.202 "data_offset": 0, 00:08:58.202 "data_size": 0 00:08:58.202 }, 00:08:58.202 { 00:08:58.202 "name": null, 00:08:58.202 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:08:58.202 "is_configured": false, 00:08:58.202 "data_offset": 0, 
00:08:58.202 "data_size": 65536 00:08:58.202 }, 00:08:58.202 { 00:08:58.202 "name": "BaseBdev3", 00:08:58.202 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:08:58.202 "is_configured": true, 00:08:58.202 "data_offset": 0, 00:08:58.202 "data_size": 65536 00:08:58.202 } 00:08:58.202 ] 00:08:58.202 }' 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.202 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.897 [2024-12-06 18:06:10.796747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.897 BaseBdev1 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.897 [ 00:08:58.897 { 00:08:58.897 "name": "BaseBdev1", 00:08:58.897 "aliases": [ 00:08:58.897 "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d" 00:08:58.897 ], 00:08:58.897 "product_name": "Malloc disk", 00:08:58.897 "block_size": 512, 00:08:58.897 "num_blocks": 65536, 00:08:58.897 "uuid": "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d", 00:08:58.897 "assigned_rate_limits": { 00:08:58.897 "rw_ios_per_sec": 0, 00:08:58.897 "rw_mbytes_per_sec": 0, 00:08:58.897 "r_mbytes_per_sec": 0, 00:08:58.897 "w_mbytes_per_sec": 0 00:08:58.897 }, 00:08:58.897 "claimed": true, 00:08:58.897 "claim_type": "exclusive_write", 00:08:58.897 "zoned": false, 00:08:58.897 "supported_io_types": { 00:08:58.897 "read": true, 00:08:58.897 "write": true, 00:08:58.897 "unmap": 
true, 00:08:58.897 "flush": true, 00:08:58.897 "reset": true, 00:08:58.897 "nvme_admin": false, 00:08:58.897 "nvme_io": false, 00:08:58.897 "nvme_io_md": false, 00:08:58.897 "write_zeroes": true, 00:08:58.897 "zcopy": true, 00:08:58.897 "get_zone_info": false, 00:08:58.897 "zone_management": false, 00:08:58.897 "zone_append": false, 00:08:58.897 "compare": false, 00:08:58.897 "compare_and_write": false, 00:08:58.897 "abort": true, 00:08:58.897 "seek_hole": false, 00:08:58.897 "seek_data": false, 00:08:58.897 "copy": true, 00:08:58.897 "nvme_iov_md": false 00:08:58.897 }, 00:08:58.897 "memory_domains": [ 00:08:58.897 { 00:08:58.897 "dma_device_id": "system", 00:08:58.897 "dma_device_type": 1 00:08:58.897 }, 00:08:58.897 { 00:08:58.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.897 "dma_device_type": 2 00:08:58.897 } 00:08:58.897 ], 00:08:58.897 "driver_specific": {} 00:08:58.897 } 00:08:58.897 ] 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.897 18:06:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.897 "name": "Existed_Raid", 00:08:58.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.897 "strip_size_kb": 64, 00:08:58.897 "state": "configuring", 00:08:58.897 "raid_level": "raid0", 00:08:58.897 "superblock": false, 00:08:58.897 "num_base_bdevs": 3, 00:08:58.897 "num_base_bdevs_discovered": 2, 00:08:58.897 "num_base_bdevs_operational": 3, 00:08:58.897 "base_bdevs_list": [ 00:08:58.897 { 00:08:58.897 "name": "BaseBdev1", 00:08:58.897 "uuid": "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d", 00:08:58.897 "is_configured": true, 00:08:58.897 "data_offset": 0, 00:08:58.897 "data_size": 65536 00:08:58.897 }, 00:08:58.897 { 00:08:58.897 "name": null, 00:08:58.897 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:08:58.897 "is_configured": false, 00:08:58.897 "data_offset": 0, 00:08:58.897 "data_size": 65536 00:08:58.897 }, 00:08:58.897 { 00:08:58.897 "name": "BaseBdev3", 00:08:58.897 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:08:58.897 "is_configured": true, 00:08:58.897 "data_offset": 0, 
00:08:58.897 "data_size": 65536 00:08:58.897 } 00:08:58.897 ] 00:08:58.897 }' 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.897 18:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.156 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.156 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.156 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.156 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.156 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.415 [2024-12-06 18:06:11.343970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.415 "name": "Existed_Raid", 00:08:59.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.415 "strip_size_kb": 64, 00:08:59.415 "state": "configuring", 00:08:59.415 "raid_level": "raid0", 00:08:59.415 "superblock": false, 00:08:59.415 "num_base_bdevs": 3, 00:08:59.415 "num_base_bdevs_discovered": 1, 00:08:59.415 "num_base_bdevs_operational": 3, 00:08:59.415 "base_bdevs_list": [ 00:08:59.415 { 00:08:59.415 "name": "BaseBdev1", 00:08:59.415 "uuid": "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d", 00:08:59.415 "is_configured": true, 00:08:59.415 "data_offset": 0, 00:08:59.415 "data_size": 65536 00:08:59.415 }, 00:08:59.415 { 
00:08:59.415 "name": null, 00:08:59.415 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:08:59.415 "is_configured": false, 00:08:59.415 "data_offset": 0, 00:08:59.415 "data_size": 65536 00:08:59.415 }, 00:08:59.415 { 00:08:59.415 "name": null, 00:08:59.415 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:08:59.415 "is_configured": false, 00:08:59.415 "data_offset": 0, 00:08:59.415 "data_size": 65536 00:08:59.415 } 00:08:59.415 ] 00:08:59.415 }' 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.415 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.674 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.674 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:59.674 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.674 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.674 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.934 [2024-12-06 18:06:11.867130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.934 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.935 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.935 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.935 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.935 "name": "Existed_Raid", 00:08:59.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.935 "strip_size_kb": 64, 00:08:59.935 "state": "configuring", 00:08:59.935 "raid_level": "raid0", 00:08:59.935 
"superblock": false, 00:08:59.935 "num_base_bdevs": 3, 00:08:59.935 "num_base_bdevs_discovered": 2, 00:08:59.935 "num_base_bdevs_operational": 3, 00:08:59.935 "base_bdevs_list": [ 00:08:59.935 { 00:08:59.935 "name": "BaseBdev1", 00:08:59.935 "uuid": "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d", 00:08:59.935 "is_configured": true, 00:08:59.935 "data_offset": 0, 00:08:59.935 "data_size": 65536 00:08:59.935 }, 00:08:59.935 { 00:08:59.935 "name": null, 00:08:59.935 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:08:59.935 "is_configured": false, 00:08:59.935 "data_offset": 0, 00:08:59.935 "data_size": 65536 00:08:59.935 }, 00:08:59.935 { 00:08:59.935 "name": "BaseBdev3", 00:08:59.935 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:08:59.935 "is_configured": true, 00:08:59.935 "data_offset": 0, 00:08:59.935 "data_size": 65536 00:08:59.935 } 00:08:59.935 ] 00:08:59.935 }' 00:08:59.935 18:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.935 18:06:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.504 [2024-12-06 18:06:12.414205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.504 "name": "Existed_Raid", 00:09:00.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.504 "strip_size_kb": 64, 00:09:00.504 "state": "configuring", 00:09:00.504 "raid_level": "raid0", 00:09:00.504 "superblock": false, 00:09:00.504 "num_base_bdevs": 3, 00:09:00.504 "num_base_bdevs_discovered": 1, 00:09:00.504 "num_base_bdevs_operational": 3, 00:09:00.504 "base_bdevs_list": [ 00:09:00.504 { 00:09:00.504 "name": null, 00:09:00.504 "uuid": "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d", 00:09:00.504 "is_configured": false, 00:09:00.504 "data_offset": 0, 00:09:00.504 "data_size": 65536 00:09:00.504 }, 00:09:00.504 { 00:09:00.504 "name": null, 00:09:00.504 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:09:00.504 "is_configured": false, 00:09:00.504 "data_offset": 0, 00:09:00.504 "data_size": 65536 00:09:00.504 }, 00:09:00.504 { 00:09:00.504 "name": "BaseBdev3", 00:09:00.504 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:09:00.504 "is_configured": true, 00:09:00.504 "data_offset": 0, 00:09:00.504 "data_size": 65536 00:09:00.504 } 00:09:00.504 ] 00:09:00.504 }' 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.504 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.078 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.078 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.078 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.078 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.078 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:01.078 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:01.078 18:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:01.078 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.078 18:06:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.078 [2024-12-06 18:06:13.008109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.078 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.078 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.078 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.079 "name": "Existed_Raid", 00:09:01.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.079 "strip_size_kb": 64, 00:09:01.079 "state": "configuring", 00:09:01.079 "raid_level": "raid0", 00:09:01.079 "superblock": false, 00:09:01.079 "num_base_bdevs": 3, 00:09:01.079 "num_base_bdevs_discovered": 2, 00:09:01.079 "num_base_bdevs_operational": 3, 00:09:01.079 "base_bdevs_list": [ 00:09:01.079 { 00:09:01.079 "name": null, 00:09:01.079 "uuid": "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d", 00:09:01.079 "is_configured": false, 00:09:01.079 "data_offset": 0, 00:09:01.079 "data_size": 65536 00:09:01.079 }, 00:09:01.079 { 00:09:01.079 "name": "BaseBdev2", 00:09:01.079 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:09:01.079 "is_configured": true, 00:09:01.079 "data_offset": 0, 00:09:01.079 "data_size": 65536 00:09:01.079 }, 00:09:01.079 { 00:09:01.079 "name": "BaseBdev3", 00:09:01.079 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:09:01.079 "is_configured": true, 00:09:01.079 "data_offset": 0, 00:09:01.079 "data_size": 65536 00:09:01.079 } 00:09:01.079 ] 00:09:01.079 }' 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.079 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.343 
18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.343 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3997abb4-e6a6-4c79-b148-ee2ac70f7f2d 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.603 [2024-12-06 18:06:13.557577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:01.603 [2024-12-06 18:06:13.557623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:01.603 [2024-12-06 18:06:13.557633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:01.603 [2024-12-06 18:06:13.557887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:01.603 [2024-12-06 18:06:13.558054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:01.603 [2024-12-06 18:06:13.558088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:01.603 [2024-12-06 18:06:13.558348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.603 NewBaseBdev 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.603 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:01.603 [ 00:09:01.603 { 00:09:01.603 "name": "NewBaseBdev", 00:09:01.603 "aliases": [ 00:09:01.603 "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d" 00:09:01.603 ], 00:09:01.603 "product_name": "Malloc disk", 00:09:01.603 "block_size": 512, 00:09:01.604 "num_blocks": 65536, 00:09:01.604 "uuid": "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d", 00:09:01.604 "assigned_rate_limits": { 00:09:01.604 "rw_ios_per_sec": 0, 00:09:01.604 "rw_mbytes_per_sec": 0, 00:09:01.604 "r_mbytes_per_sec": 0, 00:09:01.604 "w_mbytes_per_sec": 0 00:09:01.604 }, 00:09:01.604 "claimed": true, 00:09:01.604 "claim_type": "exclusive_write", 00:09:01.604 "zoned": false, 00:09:01.604 "supported_io_types": { 00:09:01.604 "read": true, 00:09:01.604 "write": true, 00:09:01.604 "unmap": true, 00:09:01.604 "flush": true, 00:09:01.604 "reset": true, 00:09:01.604 "nvme_admin": false, 00:09:01.604 "nvme_io": false, 00:09:01.604 "nvme_io_md": false, 00:09:01.604 "write_zeroes": true, 00:09:01.604 "zcopy": true, 00:09:01.604 "get_zone_info": false, 00:09:01.604 "zone_management": false, 00:09:01.604 "zone_append": false, 00:09:01.604 "compare": false, 00:09:01.604 "compare_and_write": false, 00:09:01.604 "abort": true, 00:09:01.604 "seek_hole": false, 00:09:01.604 "seek_data": false, 00:09:01.604 "copy": true, 00:09:01.604 "nvme_iov_md": false 00:09:01.604 }, 00:09:01.604 "memory_domains": [ 00:09:01.604 { 00:09:01.604 "dma_device_id": "system", 00:09:01.604 "dma_device_type": 1 00:09:01.604 }, 00:09:01.604 { 00:09:01.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.604 "dma_device_type": 2 00:09:01.604 } 00:09:01.604 ], 00:09:01.604 "driver_specific": {} 00:09:01.604 } 00:09:01.604 ] 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.604 "name": "Existed_Raid", 00:09:01.604 "uuid": "cc1c956b-937e-4a3c-8a54-b8aed1ff4f3f", 00:09:01.604 "strip_size_kb": 64, 00:09:01.604 "state": "online", 00:09:01.604 "raid_level": "raid0", 00:09:01.604 "superblock": false, 00:09:01.604 "num_base_bdevs": 3, 00:09:01.604 
"num_base_bdevs_discovered": 3, 00:09:01.604 "num_base_bdevs_operational": 3, 00:09:01.604 "base_bdevs_list": [ 00:09:01.604 { 00:09:01.604 "name": "NewBaseBdev", 00:09:01.604 "uuid": "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d", 00:09:01.604 "is_configured": true, 00:09:01.604 "data_offset": 0, 00:09:01.604 "data_size": 65536 00:09:01.604 }, 00:09:01.604 { 00:09:01.604 "name": "BaseBdev2", 00:09:01.604 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:09:01.604 "is_configured": true, 00:09:01.604 "data_offset": 0, 00:09:01.604 "data_size": 65536 00:09:01.604 }, 00:09:01.604 { 00:09:01.604 "name": "BaseBdev3", 00:09:01.604 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:09:01.604 "is_configured": true, 00:09:01.604 "data_offset": 0, 00:09:01.604 "data_size": 65536 00:09:01.604 } 00:09:01.604 ] 00:09:01.604 }' 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.604 18:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.173 [2024-12-06 18:06:14.057149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.173 "name": "Existed_Raid", 00:09:02.173 "aliases": [ 00:09:02.173 "cc1c956b-937e-4a3c-8a54-b8aed1ff4f3f" 00:09:02.173 ], 00:09:02.173 "product_name": "Raid Volume", 00:09:02.173 "block_size": 512, 00:09:02.173 "num_blocks": 196608, 00:09:02.173 "uuid": "cc1c956b-937e-4a3c-8a54-b8aed1ff4f3f", 00:09:02.173 "assigned_rate_limits": { 00:09:02.173 "rw_ios_per_sec": 0, 00:09:02.173 "rw_mbytes_per_sec": 0, 00:09:02.173 "r_mbytes_per_sec": 0, 00:09:02.173 "w_mbytes_per_sec": 0 00:09:02.173 }, 00:09:02.173 "claimed": false, 00:09:02.173 "zoned": false, 00:09:02.173 "supported_io_types": { 00:09:02.173 "read": true, 00:09:02.173 "write": true, 00:09:02.173 "unmap": true, 00:09:02.173 "flush": true, 00:09:02.173 "reset": true, 00:09:02.173 "nvme_admin": false, 00:09:02.173 "nvme_io": false, 00:09:02.173 "nvme_io_md": false, 00:09:02.173 "write_zeroes": true, 00:09:02.173 "zcopy": false, 00:09:02.173 "get_zone_info": false, 00:09:02.173 "zone_management": false, 00:09:02.173 "zone_append": false, 00:09:02.173 "compare": false, 00:09:02.173 "compare_and_write": false, 00:09:02.173 "abort": false, 00:09:02.173 "seek_hole": false, 00:09:02.173 "seek_data": false, 00:09:02.173 "copy": false, 00:09:02.173 "nvme_iov_md": false 00:09:02.173 }, 00:09:02.173 "memory_domains": [ 00:09:02.173 { 00:09:02.173 "dma_device_id": "system", 00:09:02.173 "dma_device_type": 1 00:09:02.173 }, 00:09:02.173 { 00:09:02.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.173 "dma_device_type": 2 00:09:02.173 }, 
00:09:02.173 { 00:09:02.173 "dma_device_id": "system", 00:09:02.173 "dma_device_type": 1 00:09:02.173 }, 00:09:02.173 { 00:09:02.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.173 "dma_device_type": 2 00:09:02.173 }, 00:09:02.173 { 00:09:02.173 "dma_device_id": "system", 00:09:02.173 "dma_device_type": 1 00:09:02.173 }, 00:09:02.173 { 00:09:02.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.173 "dma_device_type": 2 00:09:02.173 } 00:09:02.173 ], 00:09:02.173 "driver_specific": { 00:09:02.173 "raid": { 00:09:02.173 "uuid": "cc1c956b-937e-4a3c-8a54-b8aed1ff4f3f", 00:09:02.173 "strip_size_kb": 64, 00:09:02.173 "state": "online", 00:09:02.173 "raid_level": "raid0", 00:09:02.173 "superblock": false, 00:09:02.173 "num_base_bdevs": 3, 00:09:02.173 "num_base_bdevs_discovered": 3, 00:09:02.173 "num_base_bdevs_operational": 3, 00:09:02.173 "base_bdevs_list": [ 00:09:02.173 { 00:09:02.173 "name": "NewBaseBdev", 00:09:02.173 "uuid": "3997abb4-e6a6-4c79-b148-ee2ac70f7f2d", 00:09:02.173 "is_configured": true, 00:09:02.173 "data_offset": 0, 00:09:02.173 "data_size": 65536 00:09:02.173 }, 00:09:02.173 { 00:09:02.173 "name": "BaseBdev2", 00:09:02.173 "uuid": "817d3c5b-6f54-42f4-a138-c5912599099e", 00:09:02.173 "is_configured": true, 00:09:02.173 "data_offset": 0, 00:09:02.173 "data_size": 65536 00:09:02.173 }, 00:09:02.173 { 00:09:02.173 "name": "BaseBdev3", 00:09:02.173 "uuid": "f74be83a-a096-4132-885f-2e4d30def1ff", 00:09:02.173 "is_configured": true, 00:09:02.173 "data_offset": 0, 00:09:02.173 "data_size": 65536 00:09:02.173 } 00:09:02.173 ] 00:09:02.173 } 00:09:02.173 } 00:09:02.173 }' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:02.173 BaseBdev2 00:09:02.173 BaseBdev3' 00:09:02.173 18:06:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.173 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.433 [2024-12-06 18:06:14.344330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.433 [2024-12-06 18:06:14.344365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.433 [2024-12-06 18:06:14.344461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.433 [2024-12-06 18:06:14.344522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.433 [2024-12-06 18:06:14.344535] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64231 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64231 ']' 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64231 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64231 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64231' 00:09:02.433 killing process with pid 64231 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64231 00:09:02.433 [2024-12-06 18:06:14.384922] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.433 18:06:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64231 00:09:02.693 [2024-12-06 18:06:14.720534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:04.103 00:09:04.103 real 0m10.963s 00:09:04.103 user 0m17.390s 00:09:04.103 sys 0m1.888s 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.103 ************************************ 00:09:04.103 END TEST raid_state_function_test 00:09:04.103 ************************************ 00:09:04.103 18:06:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:04.103 18:06:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:04.103 18:06:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.103 18:06:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.103 ************************************ 00:09:04.103 START TEST raid_state_function_test_sb 00:09:04.103 ************************************ 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:04.103 18:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64858 00:09:04.103 18:06:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:04.103 Process raid pid: 64858 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64858' 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64858 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64858 ']' 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.103 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.103 [2024-12-06 18:06:16.091606] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:09:04.103 [2024-12-06 18:06:16.091827] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.103 [2024-12-06 18:06:16.267970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.363 [2024-12-06 18:06:16.394785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.622 [2024-12-06 18:06:16.615263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.622 [2024-12-06 18:06:16.615413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.880 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.880 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:04.880 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.880 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.880 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.880 [2024-12-06 18:06:16.963583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.880 [2024-12-06 18:06:16.963696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.880 [2024-12-06 18:06:16.963729] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.880 [2024-12-06 18:06:16.963753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.880 [2024-12-06 18:06:16.963778] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:04.880 [2024-12-06 18:06:16.963801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.880 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.880 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.880 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.880 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.881 18:06:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.881 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.881 "name": "Existed_Raid", 00:09:04.881 "uuid": "b4a5b255-3bbc-48b1-a7d3-b2cdb3ba3c66", 00:09:04.881 "strip_size_kb": 64, 00:09:04.881 "state": "configuring", 00:09:04.881 "raid_level": "raid0", 00:09:04.881 "superblock": true, 00:09:04.881 "num_base_bdevs": 3, 00:09:04.881 "num_base_bdevs_discovered": 0, 00:09:04.881 "num_base_bdevs_operational": 3, 00:09:04.881 "base_bdevs_list": [ 00:09:04.881 { 00:09:04.881 "name": "BaseBdev1", 00:09:04.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.881 "is_configured": false, 00:09:04.881 "data_offset": 0, 00:09:04.881 "data_size": 0 00:09:04.881 }, 00:09:04.881 { 00:09:04.881 "name": "BaseBdev2", 00:09:04.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.881 "is_configured": false, 00:09:04.881 "data_offset": 0, 00:09:04.881 "data_size": 0 00:09:04.881 }, 00:09:04.881 { 00:09:04.881 "name": "BaseBdev3", 00:09:04.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.881 "is_configured": false, 00:09:04.881 "data_offset": 0, 00:09:04.881 "data_size": 0 00:09:04.881 } 00:09:04.881 ] 00:09:04.881 }' 00:09:04.881 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.881 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.452 [2024-12-06 18:06:17.466654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.452 [2024-12-06 18:06:17.466695] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.452 [2024-12-06 18:06:17.478672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.452 [2024-12-06 18:06:17.478733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.452 [2024-12-06 18:06:17.478742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.452 [2024-12-06 18:06:17.478752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.452 [2024-12-06 18:06:17.478758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.452 [2024-12-06 18:06:17.478767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.452 [2024-12-06 18:06:17.531783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.452 BaseBdev1 
00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.452 [ 00:09:05.452 { 00:09:05.452 "name": "BaseBdev1", 00:09:05.452 "aliases": [ 00:09:05.452 "84b0a9af-d70e-43a7-9269-ce35646942f5" 00:09:05.452 ], 00:09:05.452 "product_name": "Malloc disk", 00:09:05.452 "block_size": 512, 00:09:05.452 "num_blocks": 65536, 00:09:05.452 "uuid": "84b0a9af-d70e-43a7-9269-ce35646942f5", 00:09:05.452 "assigned_rate_limits": { 00:09:05.452 
"rw_ios_per_sec": 0, 00:09:05.452 "rw_mbytes_per_sec": 0, 00:09:05.452 "r_mbytes_per_sec": 0, 00:09:05.452 "w_mbytes_per_sec": 0 00:09:05.452 }, 00:09:05.452 "claimed": true, 00:09:05.452 "claim_type": "exclusive_write", 00:09:05.452 "zoned": false, 00:09:05.452 "supported_io_types": { 00:09:05.452 "read": true, 00:09:05.452 "write": true, 00:09:05.452 "unmap": true, 00:09:05.452 "flush": true, 00:09:05.452 "reset": true, 00:09:05.452 "nvme_admin": false, 00:09:05.452 "nvme_io": false, 00:09:05.452 "nvme_io_md": false, 00:09:05.452 "write_zeroes": true, 00:09:05.452 "zcopy": true, 00:09:05.452 "get_zone_info": false, 00:09:05.452 "zone_management": false, 00:09:05.452 "zone_append": false, 00:09:05.452 "compare": false, 00:09:05.452 "compare_and_write": false, 00:09:05.452 "abort": true, 00:09:05.452 "seek_hole": false, 00:09:05.452 "seek_data": false, 00:09:05.452 "copy": true, 00:09:05.452 "nvme_iov_md": false 00:09:05.452 }, 00:09:05.452 "memory_domains": [ 00:09:05.452 { 00:09:05.452 "dma_device_id": "system", 00:09:05.452 "dma_device_type": 1 00:09:05.452 }, 00:09:05.452 { 00:09:05.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.452 "dma_device_type": 2 00:09:05.452 } 00:09:05.452 ], 00:09:05.452 "driver_specific": {} 00:09:05.452 } 00:09:05.452 ] 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.452 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.712 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.712 "name": "Existed_Raid", 00:09:05.712 "uuid": "73871155-62cc-41e1-915b-a164cd9cd11c", 00:09:05.712 "strip_size_kb": 64, 00:09:05.712 "state": "configuring", 00:09:05.712 "raid_level": "raid0", 00:09:05.712 "superblock": true, 00:09:05.712 "num_base_bdevs": 3, 00:09:05.712 "num_base_bdevs_discovered": 1, 00:09:05.712 "num_base_bdevs_operational": 3, 00:09:05.712 "base_bdevs_list": [ 00:09:05.712 { 00:09:05.712 "name": "BaseBdev1", 00:09:05.712 "uuid": "84b0a9af-d70e-43a7-9269-ce35646942f5", 00:09:05.712 "is_configured": true, 00:09:05.712 "data_offset": 2048, 00:09:05.712 "data_size": 63488 
00:09:05.712 }, 00:09:05.712 { 00:09:05.712 "name": "BaseBdev2", 00:09:05.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.712 "is_configured": false, 00:09:05.712 "data_offset": 0, 00:09:05.712 "data_size": 0 00:09:05.712 }, 00:09:05.712 { 00:09:05.712 "name": "BaseBdev3", 00:09:05.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.712 "is_configured": false, 00:09:05.712 "data_offset": 0, 00:09:05.712 "data_size": 0 00:09:05.712 } 00:09:05.712 ] 00:09:05.712 }' 00:09:05.712 18:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.712 18:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.971 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.971 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.971 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.972 [2024-12-06 18:06:18.046978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.972 [2024-12-06 18:06:18.047046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.972 [2024-12-06 18:06:18.059072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.972 [2024-12-06 
18:06:18.061167] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.972 [2024-12-06 18:06:18.061215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.972 [2024-12-06 18:06:18.061226] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.972 [2024-12-06 18:06:18.061235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.972 "name": "Existed_Raid", 00:09:05.972 "uuid": "8b0e2b84-6adc-4311-b20a-42d12abd7a12", 00:09:05.972 "strip_size_kb": 64, 00:09:05.972 "state": "configuring", 00:09:05.972 "raid_level": "raid0", 00:09:05.972 "superblock": true, 00:09:05.972 "num_base_bdevs": 3, 00:09:05.972 "num_base_bdevs_discovered": 1, 00:09:05.972 "num_base_bdevs_operational": 3, 00:09:05.972 "base_bdevs_list": [ 00:09:05.972 { 00:09:05.972 "name": "BaseBdev1", 00:09:05.972 "uuid": "84b0a9af-d70e-43a7-9269-ce35646942f5", 00:09:05.972 "is_configured": true, 00:09:05.972 "data_offset": 2048, 00:09:05.972 "data_size": 63488 00:09:05.972 }, 00:09:05.972 { 00:09:05.972 "name": "BaseBdev2", 00:09:05.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.972 "is_configured": false, 00:09:05.972 "data_offset": 0, 00:09:05.972 "data_size": 0 00:09:05.972 }, 00:09:05.972 { 00:09:05.972 "name": "BaseBdev3", 00:09:05.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.972 "is_configured": false, 00:09:05.972 "data_offset": 0, 00:09:05.972 "data_size": 0 00:09:05.972 } 00:09:05.972 ] 00:09:05.972 }' 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.972 18:06:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.539 [2024-12-06 18:06:18.538949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.539 BaseBdev2 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.539 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.539 [ 00:09:06.539 { 00:09:06.539 "name": "BaseBdev2", 00:09:06.539 "aliases": [ 00:09:06.540 "f689c84a-a115-4d6f-b8b3-f384be8b0504" 00:09:06.540 ], 00:09:06.540 "product_name": "Malloc disk", 00:09:06.540 "block_size": 512, 00:09:06.540 "num_blocks": 65536, 00:09:06.540 "uuid": "f689c84a-a115-4d6f-b8b3-f384be8b0504", 00:09:06.540 "assigned_rate_limits": { 00:09:06.540 "rw_ios_per_sec": 0, 00:09:06.540 "rw_mbytes_per_sec": 0, 00:09:06.540 "r_mbytes_per_sec": 0, 00:09:06.540 "w_mbytes_per_sec": 0 00:09:06.540 }, 00:09:06.540 "claimed": true, 00:09:06.540 "claim_type": "exclusive_write", 00:09:06.540 "zoned": false, 00:09:06.540 "supported_io_types": { 00:09:06.540 "read": true, 00:09:06.540 "write": true, 00:09:06.540 "unmap": true, 00:09:06.540 "flush": true, 00:09:06.540 "reset": true, 00:09:06.540 "nvme_admin": false, 00:09:06.540 "nvme_io": false, 00:09:06.540 "nvme_io_md": false, 00:09:06.540 "write_zeroes": true, 00:09:06.540 "zcopy": true, 00:09:06.540 "get_zone_info": false, 00:09:06.540 "zone_management": false, 00:09:06.540 "zone_append": false, 00:09:06.540 "compare": false, 00:09:06.540 "compare_and_write": false, 00:09:06.540 "abort": true, 00:09:06.540 "seek_hole": false, 00:09:06.540 "seek_data": false, 00:09:06.540 "copy": true, 00:09:06.540 "nvme_iov_md": false 00:09:06.540 }, 00:09:06.540 "memory_domains": [ 00:09:06.540 { 00:09:06.540 "dma_device_id": "system", 00:09:06.540 "dma_device_type": 1 00:09:06.540 }, 00:09:06.540 { 00:09:06.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.540 "dma_device_type": 2 00:09:06.540 } 00:09:06.540 ], 00:09:06.540 "driver_specific": {} 00:09:06.540 } 00:09:06.540 ] 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.540 "name": "Existed_Raid", 00:09:06.540 "uuid": "8b0e2b84-6adc-4311-b20a-42d12abd7a12", 00:09:06.540 "strip_size_kb": 64, 00:09:06.540 "state": "configuring", 00:09:06.540 "raid_level": "raid0", 00:09:06.540 "superblock": true, 00:09:06.540 "num_base_bdevs": 3, 00:09:06.540 "num_base_bdevs_discovered": 2, 00:09:06.540 "num_base_bdevs_operational": 3, 00:09:06.540 "base_bdevs_list": [ 00:09:06.540 { 00:09:06.540 "name": "BaseBdev1", 00:09:06.540 "uuid": "84b0a9af-d70e-43a7-9269-ce35646942f5", 00:09:06.540 "is_configured": true, 00:09:06.540 "data_offset": 2048, 00:09:06.540 "data_size": 63488 00:09:06.540 }, 00:09:06.540 { 00:09:06.540 "name": "BaseBdev2", 00:09:06.540 "uuid": "f689c84a-a115-4d6f-b8b3-f384be8b0504", 00:09:06.540 "is_configured": true, 00:09:06.540 "data_offset": 2048, 00:09:06.540 "data_size": 63488 00:09:06.540 }, 00:09:06.540 { 00:09:06.540 "name": "BaseBdev3", 00:09:06.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.540 "is_configured": false, 00:09:06.540 "data_offset": 0, 00:09:06.540 "data_size": 0 00:09:06.540 } 00:09:06.540 ] 00:09:06.540 }' 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.540 18:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.108 [2024-12-06 18:06:19.073214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.108 [2024-12-06 18:06:19.073509] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:07.108 [2024-12-06 18:06:19.073533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.108 [2024-12-06 18:06:19.073821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:07.108 [2024-12-06 18:06:19.074004] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:07.108 [2024-12-06 18:06:19.074016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:07.108 BaseBdev3 00:09:07.108 [2024-12-06 18:06:19.074201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.108 [ 00:09:07.108 { 00:09:07.108 "name": "BaseBdev3", 00:09:07.108 "aliases": [ 00:09:07.108 "753a207d-e8ff-493a-84f7-c9cc0c2a940f" 00:09:07.108 ], 00:09:07.108 "product_name": "Malloc disk", 00:09:07.108 "block_size": 512, 00:09:07.108 "num_blocks": 65536, 00:09:07.108 "uuid": "753a207d-e8ff-493a-84f7-c9cc0c2a940f", 00:09:07.108 "assigned_rate_limits": { 00:09:07.108 "rw_ios_per_sec": 0, 00:09:07.108 "rw_mbytes_per_sec": 0, 00:09:07.108 "r_mbytes_per_sec": 0, 00:09:07.108 "w_mbytes_per_sec": 0 00:09:07.108 }, 00:09:07.108 "claimed": true, 00:09:07.108 "claim_type": "exclusive_write", 00:09:07.108 "zoned": false, 00:09:07.108 "supported_io_types": { 00:09:07.108 "read": true, 00:09:07.108 "write": true, 00:09:07.108 "unmap": true, 00:09:07.108 "flush": true, 00:09:07.108 "reset": true, 00:09:07.108 "nvme_admin": false, 00:09:07.108 "nvme_io": false, 00:09:07.108 "nvme_io_md": false, 00:09:07.108 "write_zeroes": true, 00:09:07.108 "zcopy": true, 00:09:07.108 "get_zone_info": false, 00:09:07.108 "zone_management": false, 00:09:07.108 "zone_append": false, 00:09:07.108 "compare": false, 00:09:07.108 "compare_and_write": false, 00:09:07.108 "abort": true, 00:09:07.108 "seek_hole": false, 00:09:07.108 "seek_data": false, 00:09:07.108 "copy": true, 00:09:07.108 "nvme_iov_md": false 00:09:07.108 }, 00:09:07.108 "memory_domains": [ 00:09:07.108 { 00:09:07.108 "dma_device_id": "system", 00:09:07.108 "dma_device_type": 1 00:09:07.108 }, 00:09:07.108 { 00:09:07.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.108 "dma_device_type": 2 00:09:07.108 } 00:09:07.108 ], 00:09:07.108 "driver_specific": 
{} 00:09:07.108 } 00:09:07.108 ] 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.108 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.109 
18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.109 "name": "Existed_Raid", 00:09:07.109 "uuid": "8b0e2b84-6adc-4311-b20a-42d12abd7a12", 00:09:07.109 "strip_size_kb": 64, 00:09:07.109 "state": "online", 00:09:07.109 "raid_level": "raid0", 00:09:07.109 "superblock": true, 00:09:07.109 "num_base_bdevs": 3, 00:09:07.109 "num_base_bdevs_discovered": 3, 00:09:07.109 "num_base_bdevs_operational": 3, 00:09:07.109 "base_bdevs_list": [ 00:09:07.109 { 00:09:07.109 "name": "BaseBdev1", 00:09:07.109 "uuid": "84b0a9af-d70e-43a7-9269-ce35646942f5", 00:09:07.109 "is_configured": true, 00:09:07.109 "data_offset": 2048, 00:09:07.109 "data_size": 63488 00:09:07.109 }, 00:09:07.109 { 00:09:07.109 "name": "BaseBdev2", 00:09:07.109 "uuid": "f689c84a-a115-4d6f-b8b3-f384be8b0504", 00:09:07.109 "is_configured": true, 00:09:07.109 "data_offset": 2048, 00:09:07.109 "data_size": 63488 00:09:07.109 }, 00:09:07.109 { 00:09:07.109 "name": "BaseBdev3", 00:09:07.109 "uuid": "753a207d-e8ff-493a-84f7-c9cc0c2a940f", 00:09:07.109 "is_configured": true, 00:09:07.109 "data_offset": 2048, 00:09:07.109 "data_size": 63488 00:09:07.109 } 00:09:07.109 ] 00:09:07.109 }' 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.109 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.695 [2024-12-06 18:06:19.552820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.695 "name": "Existed_Raid", 00:09:07.695 "aliases": [ 00:09:07.695 "8b0e2b84-6adc-4311-b20a-42d12abd7a12" 00:09:07.695 ], 00:09:07.695 "product_name": "Raid Volume", 00:09:07.695 "block_size": 512, 00:09:07.695 "num_blocks": 190464, 00:09:07.695 "uuid": "8b0e2b84-6adc-4311-b20a-42d12abd7a12", 00:09:07.695 "assigned_rate_limits": { 00:09:07.695 "rw_ios_per_sec": 0, 00:09:07.695 "rw_mbytes_per_sec": 0, 00:09:07.695 "r_mbytes_per_sec": 0, 00:09:07.695 "w_mbytes_per_sec": 0 00:09:07.695 }, 00:09:07.695 "claimed": false, 00:09:07.695 "zoned": false, 00:09:07.695 "supported_io_types": { 00:09:07.695 "read": true, 00:09:07.695 "write": true, 00:09:07.695 "unmap": true, 00:09:07.695 "flush": true, 00:09:07.695 "reset": true, 00:09:07.695 "nvme_admin": false, 00:09:07.695 "nvme_io": false, 00:09:07.695 "nvme_io_md": false, 00:09:07.695 
"write_zeroes": true, 00:09:07.695 "zcopy": false, 00:09:07.695 "get_zone_info": false, 00:09:07.695 "zone_management": false, 00:09:07.695 "zone_append": false, 00:09:07.695 "compare": false, 00:09:07.695 "compare_and_write": false, 00:09:07.695 "abort": false, 00:09:07.695 "seek_hole": false, 00:09:07.695 "seek_data": false, 00:09:07.695 "copy": false, 00:09:07.695 "nvme_iov_md": false 00:09:07.695 }, 00:09:07.695 "memory_domains": [ 00:09:07.695 { 00:09:07.695 "dma_device_id": "system", 00:09:07.695 "dma_device_type": 1 00:09:07.695 }, 00:09:07.695 { 00:09:07.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.695 "dma_device_type": 2 00:09:07.695 }, 00:09:07.695 { 00:09:07.695 "dma_device_id": "system", 00:09:07.695 "dma_device_type": 1 00:09:07.695 }, 00:09:07.695 { 00:09:07.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.695 "dma_device_type": 2 00:09:07.695 }, 00:09:07.695 { 00:09:07.695 "dma_device_id": "system", 00:09:07.695 "dma_device_type": 1 00:09:07.695 }, 00:09:07.695 { 00:09:07.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.695 "dma_device_type": 2 00:09:07.695 } 00:09:07.695 ], 00:09:07.695 "driver_specific": { 00:09:07.695 "raid": { 00:09:07.695 "uuid": "8b0e2b84-6adc-4311-b20a-42d12abd7a12", 00:09:07.695 "strip_size_kb": 64, 00:09:07.695 "state": "online", 00:09:07.695 "raid_level": "raid0", 00:09:07.695 "superblock": true, 00:09:07.695 "num_base_bdevs": 3, 00:09:07.695 "num_base_bdevs_discovered": 3, 00:09:07.695 "num_base_bdevs_operational": 3, 00:09:07.695 "base_bdevs_list": [ 00:09:07.695 { 00:09:07.695 "name": "BaseBdev1", 00:09:07.695 "uuid": "84b0a9af-d70e-43a7-9269-ce35646942f5", 00:09:07.695 "is_configured": true, 00:09:07.695 "data_offset": 2048, 00:09:07.695 "data_size": 63488 00:09:07.695 }, 00:09:07.695 { 00:09:07.695 "name": "BaseBdev2", 00:09:07.695 "uuid": "f689c84a-a115-4d6f-b8b3-f384be8b0504", 00:09:07.695 "is_configured": true, 00:09:07.695 "data_offset": 2048, 00:09:07.695 "data_size": 63488 00:09:07.695 }, 
00:09:07.695 { 00:09:07.695 "name": "BaseBdev3", 00:09:07.695 "uuid": "753a207d-e8ff-493a-84f7-c9cc0c2a940f", 00:09:07.695 "is_configured": true, 00:09:07.695 "data_offset": 2048, 00:09:07.695 "data_size": 63488 00:09:07.695 } 00:09:07.695 ] 00:09:07.695 } 00:09:07.695 } 00:09:07.695 }' 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:07.695 BaseBdev2 00:09:07.695 BaseBdev3' 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.695 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.696 
18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.696 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.696 [2024-12-06 18:06:19.844092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.696 [2024-12-06 18:06:19.844125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.696 [2024-12-06 18:06:19.844191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.956 18:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.956 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.956 "name": "Existed_Raid", 00:09:07.956 "uuid": "8b0e2b84-6adc-4311-b20a-42d12abd7a12", 00:09:07.956 "strip_size_kb": 64, 00:09:07.956 "state": "offline", 00:09:07.956 "raid_level": "raid0", 00:09:07.956 "superblock": true, 00:09:07.956 "num_base_bdevs": 3, 00:09:07.956 "num_base_bdevs_discovered": 2, 00:09:07.956 "num_base_bdevs_operational": 2, 00:09:07.956 "base_bdevs_list": [ 00:09:07.956 { 00:09:07.956 "name": null, 00:09:07.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.956 "is_configured": false, 00:09:07.956 "data_offset": 0, 00:09:07.956 "data_size": 63488 00:09:07.956 }, 00:09:07.956 { 00:09:07.956 "name": "BaseBdev2", 00:09:07.956 "uuid": "f689c84a-a115-4d6f-b8b3-f384be8b0504", 00:09:07.956 "is_configured": true, 00:09:07.956 "data_offset": 2048, 00:09:07.956 "data_size": 63488 00:09:07.956 }, 00:09:07.956 { 00:09:07.956 "name": "BaseBdev3", 00:09:07.956 "uuid": "753a207d-e8ff-493a-84f7-c9cc0c2a940f", 
00:09:07.956 "is_configured": true, 00:09:07.956 "data_offset": 2048, 00:09:07.956 "data_size": 63488 00:09:07.956 } 00:09:07.956 ] 00:09:07.956 }' 00:09:07.956 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.956 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.525 [2024-12-06 18:06:20.447565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.525 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.526 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.526 [2024-12-06 18:06:20.598521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.526 [2024-12-06 18:06:20.598644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:08.784 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.784 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.784 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.784 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:08.784 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.785 BaseBdev2 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:08.785 18:06:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.785 [ 00:09:08.785 { 00:09:08.785 "name": "BaseBdev2", 00:09:08.785 "aliases": [ 00:09:08.785 "13d1b071-12ab-4b12-b748-9cd856e16d00" 00:09:08.785 ], 00:09:08.785 "product_name": "Malloc disk", 00:09:08.785 "block_size": 512, 00:09:08.785 "num_blocks": 65536, 00:09:08.785 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:08.785 "assigned_rate_limits": { 00:09:08.785 "rw_ios_per_sec": 0, 00:09:08.785 "rw_mbytes_per_sec": 0, 00:09:08.785 "r_mbytes_per_sec": 0, 00:09:08.785 "w_mbytes_per_sec": 0 00:09:08.785 }, 00:09:08.785 "claimed": false, 00:09:08.785 "zoned": false, 00:09:08.785 "supported_io_types": { 00:09:08.785 "read": true, 00:09:08.785 "write": true, 00:09:08.785 "unmap": true, 00:09:08.785 "flush": true, 00:09:08.785 "reset": true, 00:09:08.785 "nvme_admin": false, 00:09:08.785 "nvme_io": false, 00:09:08.785 "nvme_io_md": false, 00:09:08.785 "write_zeroes": true, 00:09:08.785 "zcopy": true, 00:09:08.785 "get_zone_info": false, 00:09:08.785 
"zone_management": false, 00:09:08.785 "zone_append": false, 00:09:08.785 "compare": false, 00:09:08.785 "compare_and_write": false, 00:09:08.785 "abort": true, 00:09:08.785 "seek_hole": false, 00:09:08.785 "seek_data": false, 00:09:08.785 "copy": true, 00:09:08.785 "nvme_iov_md": false 00:09:08.785 }, 00:09:08.785 "memory_domains": [ 00:09:08.785 { 00:09:08.785 "dma_device_id": "system", 00:09:08.785 "dma_device_type": 1 00:09:08.785 }, 00:09:08.785 { 00:09:08.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.785 "dma_device_type": 2 00:09:08.785 } 00:09:08.785 ], 00:09:08.785 "driver_specific": {} 00:09:08.785 } 00:09:08.785 ] 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.785 BaseBdev3 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.785 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.785 [ 00:09:08.785 { 00:09:08.785 "name": "BaseBdev3", 00:09:08.785 "aliases": [ 00:09:08.786 "5c93db7c-2256-4947-a472-325e5a6721a7" 00:09:08.786 ], 00:09:08.786 "product_name": "Malloc disk", 00:09:08.786 "block_size": 512, 00:09:08.786 "num_blocks": 65536, 00:09:08.786 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:08.786 "assigned_rate_limits": { 00:09:08.786 "rw_ios_per_sec": 0, 00:09:08.786 "rw_mbytes_per_sec": 0, 00:09:08.786 "r_mbytes_per_sec": 0, 00:09:08.786 "w_mbytes_per_sec": 0 00:09:08.786 }, 00:09:08.786 "claimed": false, 00:09:08.786 "zoned": false, 00:09:08.786 "supported_io_types": { 00:09:08.786 "read": true, 00:09:08.786 "write": true, 00:09:08.786 "unmap": true, 00:09:08.786 "flush": true, 00:09:08.786 "reset": true, 00:09:08.786 "nvme_admin": false, 00:09:08.786 "nvme_io": false, 00:09:08.786 "nvme_io_md": false, 00:09:08.786 "write_zeroes": true, 00:09:08.786 
"zcopy": true, 00:09:08.786 "get_zone_info": false, 00:09:08.786 "zone_management": false, 00:09:08.786 "zone_append": false, 00:09:08.786 "compare": false, 00:09:08.786 "compare_and_write": false, 00:09:08.786 "abort": true, 00:09:08.786 "seek_hole": false, 00:09:08.786 "seek_data": false, 00:09:08.786 "copy": true, 00:09:08.786 "nvme_iov_md": false 00:09:08.786 }, 00:09:08.786 "memory_domains": [ 00:09:08.786 { 00:09:08.786 "dma_device_id": "system", 00:09:08.786 "dma_device_type": 1 00:09:08.786 }, 00:09:08.786 { 00:09:08.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.786 "dma_device_type": 2 00:09:08.786 } 00:09:08.786 ], 00:09:08.786 "driver_specific": {} 00:09:08.786 } 00:09:08.786 ] 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 [2024-12-06 18:06:20.925337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.786 [2024-12-06 18:06:20.925436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.786 [2024-12-06 18:06:20.925484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.786 [2024-12-06 18:06:20.927329] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.786 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.044 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.044 18:06:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.044 "name": "Existed_Raid", 00:09:09.044 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:09.044 "strip_size_kb": 64, 00:09:09.044 "state": "configuring", 00:09:09.044 "raid_level": "raid0", 00:09:09.044 "superblock": true, 00:09:09.044 "num_base_bdevs": 3, 00:09:09.044 "num_base_bdevs_discovered": 2, 00:09:09.044 "num_base_bdevs_operational": 3, 00:09:09.044 "base_bdevs_list": [ 00:09:09.044 { 00:09:09.044 "name": "BaseBdev1", 00:09:09.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.044 "is_configured": false, 00:09:09.044 "data_offset": 0, 00:09:09.044 "data_size": 0 00:09:09.044 }, 00:09:09.044 { 00:09:09.044 "name": "BaseBdev2", 00:09:09.045 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:09.045 "is_configured": true, 00:09:09.045 "data_offset": 2048, 00:09:09.045 "data_size": 63488 00:09:09.045 }, 00:09:09.045 { 00:09:09.045 "name": "BaseBdev3", 00:09:09.045 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:09.045 "is_configured": true, 00:09:09.045 "data_offset": 2048, 00:09:09.045 "data_size": 63488 00:09:09.045 } 00:09:09.045 ] 00:09:09.045 }' 00:09:09.045 18:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.045 18:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.303 [2024-12-06 18:06:21.408549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.303 18:06:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.303 "name": "Existed_Raid", 00:09:09.303 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:09.303 "strip_size_kb": 64, 
00:09:09.303 "state": "configuring", 00:09:09.303 "raid_level": "raid0", 00:09:09.303 "superblock": true, 00:09:09.303 "num_base_bdevs": 3, 00:09:09.303 "num_base_bdevs_discovered": 1, 00:09:09.303 "num_base_bdevs_operational": 3, 00:09:09.303 "base_bdevs_list": [ 00:09:09.303 { 00:09:09.303 "name": "BaseBdev1", 00:09:09.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.303 "is_configured": false, 00:09:09.303 "data_offset": 0, 00:09:09.303 "data_size": 0 00:09:09.303 }, 00:09:09.303 { 00:09:09.303 "name": null, 00:09:09.303 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:09.303 "is_configured": false, 00:09:09.303 "data_offset": 0, 00:09:09.303 "data_size": 63488 00:09:09.303 }, 00:09:09.303 { 00:09:09.303 "name": "BaseBdev3", 00:09:09.303 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:09.303 "is_configured": true, 00:09:09.303 "data_offset": 2048, 00:09:09.303 "data_size": 63488 00:09:09.303 } 00:09:09.303 ] 00:09:09.303 }' 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.303 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.926 [2024-12-06 18:06:21.920253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.926 BaseBdev1 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.926 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.926 
[ 00:09:09.926 { 00:09:09.926 "name": "BaseBdev1", 00:09:09.926 "aliases": [ 00:09:09.926 "29b37a85-d5cd-4552-aa81-e9223e820bc2" 00:09:09.926 ], 00:09:09.926 "product_name": "Malloc disk", 00:09:09.926 "block_size": 512, 00:09:09.926 "num_blocks": 65536, 00:09:09.926 "uuid": "29b37a85-d5cd-4552-aa81-e9223e820bc2", 00:09:09.926 "assigned_rate_limits": { 00:09:09.926 "rw_ios_per_sec": 0, 00:09:09.926 "rw_mbytes_per_sec": 0, 00:09:09.926 "r_mbytes_per_sec": 0, 00:09:09.926 "w_mbytes_per_sec": 0 00:09:09.926 }, 00:09:09.926 "claimed": true, 00:09:09.926 "claim_type": "exclusive_write", 00:09:09.926 "zoned": false, 00:09:09.926 "supported_io_types": { 00:09:09.926 "read": true, 00:09:09.927 "write": true, 00:09:09.927 "unmap": true, 00:09:09.927 "flush": true, 00:09:09.927 "reset": true, 00:09:09.927 "nvme_admin": false, 00:09:09.927 "nvme_io": false, 00:09:09.927 "nvme_io_md": false, 00:09:09.927 "write_zeroes": true, 00:09:09.927 "zcopy": true, 00:09:09.927 "get_zone_info": false, 00:09:09.927 "zone_management": false, 00:09:09.927 "zone_append": false, 00:09:09.927 "compare": false, 00:09:09.927 "compare_and_write": false, 00:09:09.927 "abort": true, 00:09:09.927 "seek_hole": false, 00:09:09.927 "seek_data": false, 00:09:09.927 "copy": true, 00:09:09.927 "nvme_iov_md": false 00:09:09.927 }, 00:09:09.927 "memory_domains": [ 00:09:09.927 { 00:09:09.927 "dma_device_id": "system", 00:09:09.927 "dma_device_type": 1 00:09:09.927 }, 00:09:09.927 { 00:09:09.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.927 "dma_device_type": 2 00:09:09.927 } 00:09:09.927 ], 00:09:09.927 "driver_specific": {} 00:09:09.927 } 00:09:09.927 ] 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.927 18:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.927 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.927 "name": "Existed_Raid", 00:09:09.927 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:09.927 "strip_size_kb": 64, 00:09:09.927 "state": "configuring", 00:09:09.927 "raid_level": "raid0", 00:09:09.927 "superblock": true, 
00:09:09.927 "num_base_bdevs": 3, 00:09:09.927 "num_base_bdevs_discovered": 2, 00:09:09.927 "num_base_bdevs_operational": 3, 00:09:09.927 "base_bdevs_list": [ 00:09:09.927 { 00:09:09.927 "name": "BaseBdev1", 00:09:09.927 "uuid": "29b37a85-d5cd-4552-aa81-e9223e820bc2", 00:09:09.927 "is_configured": true, 00:09:09.927 "data_offset": 2048, 00:09:09.927 "data_size": 63488 00:09:09.927 }, 00:09:09.927 { 00:09:09.927 "name": null, 00:09:09.927 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:09.927 "is_configured": false, 00:09:09.927 "data_offset": 0, 00:09:09.927 "data_size": 63488 00:09:09.927 }, 00:09:09.927 { 00:09:09.927 "name": "BaseBdev3", 00:09:09.927 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:09.927 "is_configured": true, 00:09:09.927 "data_offset": 2048, 00:09:09.927 "data_size": 63488 00:09:09.927 } 00:09:09.927 ] 00:09:09.927 }' 00:09:09.927 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.927 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.500 [2024-12-06 18:06:22.475470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.500 "name": "Existed_Raid", 00:09:10.500 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:10.500 "strip_size_kb": 64, 00:09:10.500 "state": "configuring", 00:09:10.500 "raid_level": "raid0", 00:09:10.500 "superblock": true, 00:09:10.500 "num_base_bdevs": 3, 00:09:10.500 "num_base_bdevs_discovered": 1, 00:09:10.500 "num_base_bdevs_operational": 3, 00:09:10.500 "base_bdevs_list": [ 00:09:10.500 { 00:09:10.500 "name": "BaseBdev1", 00:09:10.500 "uuid": "29b37a85-d5cd-4552-aa81-e9223e820bc2", 00:09:10.500 "is_configured": true, 00:09:10.500 "data_offset": 2048, 00:09:10.500 "data_size": 63488 00:09:10.500 }, 00:09:10.500 { 00:09:10.500 "name": null, 00:09:10.500 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:10.500 "is_configured": false, 00:09:10.500 "data_offset": 0, 00:09:10.500 "data_size": 63488 00:09:10.500 }, 00:09:10.500 { 00:09:10.500 "name": null, 00:09:10.500 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:10.500 "is_configured": false, 00:09:10.500 "data_offset": 0, 00:09:10.500 "data_size": 63488 00:09:10.500 } 00:09:10.500 ] 00:09:10.500 }' 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.500 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.069 [2024-12-06 18:06:22.994641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.069 18:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.069 "name": "Existed_Raid", 00:09:11.069 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:11.069 "strip_size_kb": 64, 00:09:11.069 "state": "configuring", 00:09:11.069 "raid_level": "raid0", 00:09:11.069 "superblock": true, 00:09:11.069 "num_base_bdevs": 3, 00:09:11.069 "num_base_bdevs_discovered": 2, 00:09:11.069 "num_base_bdevs_operational": 3, 00:09:11.069 "base_bdevs_list": [ 00:09:11.069 { 00:09:11.069 "name": "BaseBdev1", 00:09:11.069 "uuid": "29b37a85-d5cd-4552-aa81-e9223e820bc2", 00:09:11.069 "is_configured": true, 00:09:11.069 "data_offset": 2048, 00:09:11.069 "data_size": 63488 00:09:11.069 }, 00:09:11.069 { 00:09:11.069 "name": null, 00:09:11.069 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:11.069 "is_configured": false, 00:09:11.069 "data_offset": 0, 00:09:11.069 "data_size": 63488 00:09:11.069 }, 00:09:11.069 { 00:09:11.069 "name": "BaseBdev3", 00:09:11.069 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:11.069 "is_configured": true, 00:09:11.069 "data_offset": 2048, 00:09:11.069 "data_size": 63488 00:09:11.069 } 00:09:11.069 ] 00:09:11.069 }' 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.069 18:06:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.328 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.328 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.328 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.586 [2024-12-06 18:06:23.545706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.586 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.587 "name": "Existed_Raid", 00:09:11.587 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:11.587 "strip_size_kb": 64, 00:09:11.587 "state": "configuring", 00:09:11.587 "raid_level": "raid0", 00:09:11.587 "superblock": true, 00:09:11.587 "num_base_bdevs": 3, 00:09:11.587 "num_base_bdevs_discovered": 1, 00:09:11.587 "num_base_bdevs_operational": 3, 00:09:11.587 "base_bdevs_list": [ 00:09:11.587 { 00:09:11.587 "name": null, 00:09:11.587 "uuid": "29b37a85-d5cd-4552-aa81-e9223e820bc2", 00:09:11.587 "is_configured": false, 00:09:11.587 "data_offset": 0, 00:09:11.587 "data_size": 63488 00:09:11.587 }, 00:09:11.587 { 00:09:11.587 "name": null, 00:09:11.587 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:11.587 "is_configured": false, 00:09:11.587 "data_offset": 0, 00:09:11.587 
"data_size": 63488 00:09:11.587 }, 00:09:11.587 { 00:09:11.587 "name": "BaseBdev3", 00:09:11.587 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:11.587 "is_configured": true, 00:09:11.587 "data_offset": 2048, 00:09:11.587 "data_size": 63488 00:09:11.587 } 00:09:11.587 ] 00:09:11.587 }' 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.587 18:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.153 [2024-12-06 18:06:24.155161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.153 18:06:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.153 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.154 "name": "Existed_Raid", 00:09:12.154 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:12.154 "strip_size_kb": 64, 00:09:12.154 "state": "configuring", 00:09:12.154 "raid_level": "raid0", 00:09:12.154 "superblock": true, 00:09:12.154 "num_base_bdevs": 3, 00:09:12.154 
"num_base_bdevs_discovered": 2, 00:09:12.154 "num_base_bdevs_operational": 3, 00:09:12.154 "base_bdevs_list": [ 00:09:12.154 { 00:09:12.154 "name": null, 00:09:12.154 "uuid": "29b37a85-d5cd-4552-aa81-e9223e820bc2", 00:09:12.154 "is_configured": false, 00:09:12.154 "data_offset": 0, 00:09:12.154 "data_size": 63488 00:09:12.154 }, 00:09:12.154 { 00:09:12.154 "name": "BaseBdev2", 00:09:12.154 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:12.154 "is_configured": true, 00:09:12.154 "data_offset": 2048, 00:09:12.154 "data_size": 63488 00:09:12.154 }, 00:09:12.154 { 00:09:12.154 "name": "BaseBdev3", 00:09:12.154 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:12.154 "is_configured": true, 00:09:12.154 "data_offset": 2048, 00:09:12.154 "data_size": 63488 00:09:12.154 } 00:09:12.154 ] 00:09:12.154 }' 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.154 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:12.722 18:06:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 29b37a85-d5cd-4552-aa81-e9223e820bc2 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.722 [2024-12-06 18:06:24.753639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:12.722 [2024-12-06 18:06:24.753899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:12.722 [2024-12-06 18:06:24.753917] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.722 [2024-12-06 18:06:24.754222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.722 NewBaseBdev 00:09:12.722 [2024-12-06 18:06:24.754404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:12.722 [2024-12-06 18:06:24.754424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:12.722 [2024-12-06 18:06:24.754582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:12.722 
18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.722 [ 00:09:12.722 { 00:09:12.722 "name": "NewBaseBdev", 00:09:12.722 "aliases": [ 00:09:12.722 "29b37a85-d5cd-4552-aa81-e9223e820bc2" 00:09:12.722 ], 00:09:12.722 "product_name": "Malloc disk", 00:09:12.722 "block_size": 512, 00:09:12.722 "num_blocks": 65536, 00:09:12.722 "uuid": "29b37a85-d5cd-4552-aa81-e9223e820bc2", 00:09:12.722 "assigned_rate_limits": { 00:09:12.722 "rw_ios_per_sec": 0, 00:09:12.722 "rw_mbytes_per_sec": 0, 00:09:12.722 "r_mbytes_per_sec": 0, 00:09:12.722 "w_mbytes_per_sec": 0 00:09:12.722 }, 00:09:12.722 "claimed": true, 00:09:12.722 "claim_type": "exclusive_write", 00:09:12.722 "zoned": false, 00:09:12.722 "supported_io_types": { 00:09:12.722 "read": true, 00:09:12.722 "write": true, 00:09:12.722 
"unmap": true, 00:09:12.722 "flush": true, 00:09:12.722 "reset": true, 00:09:12.722 "nvme_admin": false, 00:09:12.722 "nvme_io": false, 00:09:12.722 "nvme_io_md": false, 00:09:12.722 "write_zeroes": true, 00:09:12.722 "zcopy": true, 00:09:12.722 "get_zone_info": false, 00:09:12.722 "zone_management": false, 00:09:12.722 "zone_append": false, 00:09:12.722 "compare": false, 00:09:12.722 "compare_and_write": false, 00:09:12.722 "abort": true, 00:09:12.722 "seek_hole": false, 00:09:12.722 "seek_data": false, 00:09:12.722 "copy": true, 00:09:12.722 "nvme_iov_md": false 00:09:12.722 }, 00:09:12.722 "memory_domains": [ 00:09:12.722 { 00:09:12.722 "dma_device_id": "system", 00:09:12.722 "dma_device_type": 1 00:09:12.722 }, 00:09:12.722 { 00:09:12.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.722 "dma_device_type": 2 00:09:12.722 } 00:09:12.722 ], 00:09:12.722 "driver_specific": {} 00:09:12.722 } 00:09:12.722 ] 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.722 "name": "Existed_Raid", 00:09:12.722 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:12.722 "strip_size_kb": 64, 00:09:12.722 "state": "online", 00:09:12.722 "raid_level": "raid0", 00:09:12.722 "superblock": true, 00:09:12.722 "num_base_bdevs": 3, 00:09:12.722 "num_base_bdevs_discovered": 3, 00:09:12.722 "num_base_bdevs_operational": 3, 00:09:12.722 "base_bdevs_list": [ 00:09:12.722 { 00:09:12.722 "name": "NewBaseBdev", 00:09:12.722 "uuid": "29b37a85-d5cd-4552-aa81-e9223e820bc2", 00:09:12.722 "is_configured": true, 00:09:12.722 "data_offset": 2048, 00:09:12.722 "data_size": 63488 00:09:12.722 }, 00:09:12.722 { 00:09:12.722 "name": "BaseBdev2", 00:09:12.722 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:12.722 "is_configured": true, 00:09:12.722 "data_offset": 2048, 00:09:12.722 "data_size": 63488 00:09:12.722 }, 00:09:12.722 { 00:09:12.722 "name": "BaseBdev3", 00:09:12.722 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:12.722 
"is_configured": true, 00:09:12.722 "data_offset": 2048, 00:09:12.722 "data_size": 63488 00:09:12.722 } 00:09:12.722 ] 00:09:12.722 }' 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.722 18:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.289 [2024-12-06 18:06:25.229324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.289 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.289 "name": "Existed_Raid", 00:09:13.289 "aliases": [ 00:09:13.289 "2e4816d8-45be-48d8-aa7c-4c9edab1757a" 00:09:13.289 ], 00:09:13.289 "product_name": "Raid 
Volume", 00:09:13.289 "block_size": 512, 00:09:13.289 "num_blocks": 190464, 00:09:13.289 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:13.289 "assigned_rate_limits": { 00:09:13.289 "rw_ios_per_sec": 0, 00:09:13.289 "rw_mbytes_per_sec": 0, 00:09:13.289 "r_mbytes_per_sec": 0, 00:09:13.289 "w_mbytes_per_sec": 0 00:09:13.289 }, 00:09:13.289 "claimed": false, 00:09:13.289 "zoned": false, 00:09:13.289 "supported_io_types": { 00:09:13.289 "read": true, 00:09:13.289 "write": true, 00:09:13.289 "unmap": true, 00:09:13.289 "flush": true, 00:09:13.289 "reset": true, 00:09:13.289 "nvme_admin": false, 00:09:13.289 "nvme_io": false, 00:09:13.289 "nvme_io_md": false, 00:09:13.289 "write_zeroes": true, 00:09:13.289 "zcopy": false, 00:09:13.289 "get_zone_info": false, 00:09:13.290 "zone_management": false, 00:09:13.290 "zone_append": false, 00:09:13.290 "compare": false, 00:09:13.290 "compare_and_write": false, 00:09:13.290 "abort": false, 00:09:13.290 "seek_hole": false, 00:09:13.290 "seek_data": false, 00:09:13.290 "copy": false, 00:09:13.290 "nvme_iov_md": false 00:09:13.290 }, 00:09:13.290 "memory_domains": [ 00:09:13.290 { 00:09:13.290 "dma_device_id": "system", 00:09:13.290 "dma_device_type": 1 00:09:13.290 }, 00:09:13.290 { 00:09:13.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.290 "dma_device_type": 2 00:09:13.290 }, 00:09:13.290 { 00:09:13.290 "dma_device_id": "system", 00:09:13.290 "dma_device_type": 1 00:09:13.290 }, 00:09:13.290 { 00:09:13.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.290 "dma_device_type": 2 00:09:13.290 }, 00:09:13.290 { 00:09:13.290 "dma_device_id": "system", 00:09:13.290 "dma_device_type": 1 00:09:13.290 }, 00:09:13.290 { 00:09:13.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.290 "dma_device_type": 2 00:09:13.290 } 00:09:13.290 ], 00:09:13.290 "driver_specific": { 00:09:13.290 "raid": { 00:09:13.290 "uuid": "2e4816d8-45be-48d8-aa7c-4c9edab1757a", 00:09:13.290 "strip_size_kb": 64, 00:09:13.290 "state": "online", 
00:09:13.290 "raid_level": "raid0", 00:09:13.290 "superblock": true, 00:09:13.290 "num_base_bdevs": 3, 00:09:13.290 "num_base_bdevs_discovered": 3, 00:09:13.290 "num_base_bdevs_operational": 3, 00:09:13.290 "base_bdevs_list": [ 00:09:13.290 { 00:09:13.290 "name": "NewBaseBdev", 00:09:13.290 "uuid": "29b37a85-d5cd-4552-aa81-e9223e820bc2", 00:09:13.290 "is_configured": true, 00:09:13.290 "data_offset": 2048, 00:09:13.290 "data_size": 63488 00:09:13.290 }, 00:09:13.290 { 00:09:13.290 "name": "BaseBdev2", 00:09:13.290 "uuid": "13d1b071-12ab-4b12-b748-9cd856e16d00", 00:09:13.290 "is_configured": true, 00:09:13.290 "data_offset": 2048, 00:09:13.290 "data_size": 63488 00:09:13.290 }, 00:09:13.290 { 00:09:13.290 "name": "BaseBdev3", 00:09:13.290 "uuid": "5c93db7c-2256-4947-a472-325e5a6721a7", 00:09:13.290 "is_configured": true, 00:09:13.290 "data_offset": 2048, 00:09:13.290 "data_size": 63488 00:09:13.290 } 00:09:13.290 ] 00:09:13.290 } 00:09:13.290 } 00:09:13.290 }' 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:13.290 BaseBdev2 00:09:13.290 BaseBdev3' 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.290 18:06:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.290 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.547 [2024-12-06 18:06:25.528436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.547 [2024-12-06 18:06:25.528468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.547 [2024-12-06 18:06:25.528564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.547 [2024-12-06 18:06:25.528635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.547 [2024-12-06 18:06:25.528649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64858 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64858 ']' 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64858 00:09:13.547 
18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64858 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64858' 00:09:13.547 killing process with pid 64858 00:09:13.547 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64858 00:09:13.547 [2024-12-06 18:06:25.582373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.548 18:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64858 00:09:13.805 [2024-12-06 18:06:25.947769] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.180 18:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:15.180 ************************************ 00:09:15.180 END TEST raid_state_function_test_sb 00:09:15.180 ************************************ 00:09:15.180 00:09:15.180 real 0m11.303s 00:09:15.180 user 0m17.900s 00:09:15.180 sys 0m1.884s 00:09:15.180 18:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.180 18:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.180 18:06:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:15.180 18:06:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:15.180 18:06:27 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.180 18:06:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.439 ************************************ 00:09:15.439 START TEST raid_superblock_test 00:09:15.439 ************************************ 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:15.439 18:06:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65489 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65489 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65489 ']' 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.439 18:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.439 [2024-12-06 18:06:27.464333] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:09:15.439 [2024-12-06 18:06:27.464477] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65489 ] 00:09:15.699 [2024-12-06 18:06:27.645417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.699 [2024-12-06 18:06:27.778338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.000 [2024-12-06 18:06:28.002919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.000 [2024-12-06 18:06:28.002972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:16.309 
18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.309 malloc1 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.309 [2024-12-06 18:06:28.439213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:16.309 [2024-12-06 18:06:28.439377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.309 [2024-12-06 18:06:28.439429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:16.309 [2024-12-06 18:06:28.439468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.309 [2024-12-06 18:06:28.441966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.309 [2024-12-06 18:06:28.442092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:16.309 pt1 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.309 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.569 malloc2 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.569 [2024-12-06 18:06:28.502254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:16.569 [2024-12-06 18:06:28.502398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.569 [2024-12-06 18:06:28.502456] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:16.569 [2024-12-06 18:06:28.502502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.569 [2024-12-06 18:06:28.504976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.569 [2024-12-06 18:06:28.505067] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:16.569 
pt2 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.569 malloc3 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.569 [2024-12-06 18:06:28.572524] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:16.569 [2024-12-06 18:06:28.572635] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.569 [2024-12-06 18:06:28.572680] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:16.569 [2024-12-06 18:06:28.572715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.569 [2024-12-06 18:06:28.574983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.569 [2024-12-06 18:06:28.575075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:16.569 pt3 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.569 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.569 [2024-12-06 18:06:28.580547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:16.569 [2024-12-06 18:06:28.582705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.569 [2024-12-06 18:06:28.582831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:16.569 [2024-12-06 18:06:28.583087] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:16.569 [2024-12-06 18:06:28.583147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:16.569 [2024-12-06 18:06:28.583475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:16.569 [2024-12-06 18:06:28.583702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:16.569 [2024-12-06 18:06:28.583747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:16.569 [2024-12-06 18:06:28.583971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.570 18:06:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.570 "name": "raid_bdev1", 00:09:16.570 "uuid": "3f34d4f4-ca5d-4db5-8df6-35da34850f90", 00:09:16.570 "strip_size_kb": 64, 00:09:16.570 "state": "online", 00:09:16.570 "raid_level": "raid0", 00:09:16.570 "superblock": true, 00:09:16.570 "num_base_bdevs": 3, 00:09:16.570 "num_base_bdevs_discovered": 3, 00:09:16.570 "num_base_bdevs_operational": 3, 00:09:16.570 "base_bdevs_list": [ 00:09:16.570 { 00:09:16.570 "name": "pt1", 00:09:16.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.570 "is_configured": true, 00:09:16.570 "data_offset": 2048, 00:09:16.570 "data_size": 63488 00:09:16.570 }, 00:09:16.570 { 00:09:16.570 "name": "pt2", 00:09:16.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.570 "is_configured": true, 00:09:16.570 "data_offset": 2048, 00:09:16.570 "data_size": 63488 00:09:16.570 }, 00:09:16.570 { 00:09:16.570 "name": "pt3", 00:09:16.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.570 "is_configured": true, 00:09:16.570 "data_offset": 2048, 00:09:16.570 "data_size": 63488 00:09:16.570 } 00:09:16.570 ] 00:09:16.570 }' 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.570 18:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.139 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.139 [2024-12-06 18:06:29.076059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.140 "name": "raid_bdev1", 00:09:17.140 "aliases": [ 00:09:17.140 "3f34d4f4-ca5d-4db5-8df6-35da34850f90" 00:09:17.140 ], 00:09:17.140 "product_name": "Raid Volume", 00:09:17.140 "block_size": 512, 00:09:17.140 "num_blocks": 190464, 00:09:17.140 "uuid": "3f34d4f4-ca5d-4db5-8df6-35da34850f90", 00:09:17.140 "assigned_rate_limits": { 00:09:17.140 "rw_ios_per_sec": 0, 00:09:17.140 "rw_mbytes_per_sec": 0, 00:09:17.140 "r_mbytes_per_sec": 0, 00:09:17.140 "w_mbytes_per_sec": 0 00:09:17.140 }, 00:09:17.140 "claimed": false, 00:09:17.140 "zoned": false, 00:09:17.140 "supported_io_types": { 00:09:17.140 "read": true, 00:09:17.140 "write": true, 00:09:17.140 "unmap": true, 00:09:17.140 "flush": true, 00:09:17.140 "reset": true, 00:09:17.140 "nvme_admin": false, 00:09:17.140 "nvme_io": false, 00:09:17.140 "nvme_io_md": false, 00:09:17.140 "write_zeroes": true, 00:09:17.140 "zcopy": false, 00:09:17.140 "get_zone_info": false, 00:09:17.140 "zone_management": false, 00:09:17.140 "zone_append": false, 00:09:17.140 "compare": 
false, 00:09:17.140 "compare_and_write": false, 00:09:17.140 "abort": false, 00:09:17.140 "seek_hole": false, 00:09:17.140 "seek_data": false, 00:09:17.140 "copy": false, 00:09:17.140 "nvme_iov_md": false 00:09:17.140 }, 00:09:17.140 "memory_domains": [ 00:09:17.140 { 00:09:17.140 "dma_device_id": "system", 00:09:17.140 "dma_device_type": 1 00:09:17.140 }, 00:09:17.140 { 00:09:17.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.140 "dma_device_type": 2 00:09:17.140 }, 00:09:17.140 { 00:09:17.140 "dma_device_id": "system", 00:09:17.140 "dma_device_type": 1 00:09:17.140 }, 00:09:17.140 { 00:09:17.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.140 "dma_device_type": 2 00:09:17.140 }, 00:09:17.140 { 00:09:17.140 "dma_device_id": "system", 00:09:17.140 "dma_device_type": 1 00:09:17.140 }, 00:09:17.140 { 00:09:17.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.140 "dma_device_type": 2 00:09:17.140 } 00:09:17.140 ], 00:09:17.140 "driver_specific": { 00:09:17.140 "raid": { 00:09:17.140 "uuid": "3f34d4f4-ca5d-4db5-8df6-35da34850f90", 00:09:17.140 "strip_size_kb": 64, 00:09:17.140 "state": "online", 00:09:17.140 "raid_level": "raid0", 00:09:17.140 "superblock": true, 00:09:17.140 "num_base_bdevs": 3, 00:09:17.140 "num_base_bdevs_discovered": 3, 00:09:17.140 "num_base_bdevs_operational": 3, 00:09:17.140 "base_bdevs_list": [ 00:09:17.140 { 00:09:17.140 "name": "pt1", 00:09:17.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.140 "is_configured": true, 00:09:17.140 "data_offset": 2048, 00:09:17.140 "data_size": 63488 00:09:17.140 }, 00:09:17.140 { 00:09:17.140 "name": "pt2", 00:09:17.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.140 "is_configured": true, 00:09:17.140 "data_offset": 2048, 00:09:17.140 "data_size": 63488 00:09:17.140 }, 00:09:17.140 { 00:09:17.140 "name": "pt3", 00:09:17.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.140 "is_configured": true, 00:09:17.140 "data_offset": 2048, 00:09:17.140 "data_size": 
63488 00:09:17.140 } 00:09:17.140 ] 00:09:17.140 } 00:09:17.140 } 00:09:17.140 }' 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:17.140 pt2 00:09:17.140 pt3' 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.140 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.399 [2024-12-06 18:06:29.359641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3f34d4f4-ca5d-4db5-8df6-35da34850f90 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3f34d4f4-ca5d-4db5-8df6-35da34850f90 ']' 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.399 [2024-12-06 18:06:29.407222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.399 [2024-12-06 18:06:29.407306] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.399 [2024-12-06 18:06:29.407455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.399 [2024-12-06 18:06:29.407565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.399 [2024-12-06 18:06:29.407615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:17.399 18:06:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.399 [2024-12-06 18:06:29.543061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:17.399 [2024-12-06 18:06:29.545251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:17.399 [2024-12-06 18:06:29.545384] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:17.399 [2024-12-06 18:06:29.545451] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:17.399 [2024-12-06 18:06:29.545521] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:17.399 [2024-12-06 18:06:29.545549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:17.399 [2024-12-06 18:06:29.545569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.399 [2024-12-06 18:06:29.545583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:17.399 request: 00:09:17.399 { 00:09:17.399 "name": "raid_bdev1", 00:09:17.399 "raid_level": "raid0", 00:09:17.399 "base_bdevs": [ 00:09:17.399 "malloc1", 00:09:17.399 "malloc2", 00:09:17.399 "malloc3" 00:09:17.399 ], 00:09:17.399 "strip_size_kb": 64, 00:09:17.399 "superblock": false, 00:09:17.399 "method": "bdev_raid_create", 00:09:17.399 "req_id": 1 00:09:17.399 } 00:09:17.399 Got JSON-RPC error response 00:09:17.399 response: 00:09:17.399 { 00:09:17.399 "code": -17, 00:09:17.399 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:17.399 } 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.399 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.658 [2024-12-06 18:06:29.598930] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.658 [2024-12-06 18:06:29.599070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.658 [2024-12-06 18:06:29.599118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:17.658 [2024-12-06 18:06:29.599169] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.658 [2024-12-06 18:06:29.601796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.658 [2024-12-06 18:06:29.601891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.658 [2024-12-06 18:06:29.602049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:17.658 [2024-12-06 18:06:29.602164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:17.658 pt1 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.658 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.659 "name": "raid_bdev1", 00:09:17.659 "uuid": "3f34d4f4-ca5d-4db5-8df6-35da34850f90", 00:09:17.659 
"strip_size_kb": 64, 00:09:17.659 "state": "configuring", 00:09:17.659 "raid_level": "raid0", 00:09:17.659 "superblock": true, 00:09:17.659 "num_base_bdevs": 3, 00:09:17.659 "num_base_bdevs_discovered": 1, 00:09:17.659 "num_base_bdevs_operational": 3, 00:09:17.659 "base_bdevs_list": [ 00:09:17.659 { 00:09:17.659 "name": "pt1", 00:09:17.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.659 "is_configured": true, 00:09:17.659 "data_offset": 2048, 00:09:17.659 "data_size": 63488 00:09:17.659 }, 00:09:17.659 { 00:09:17.659 "name": null, 00:09:17.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.659 "is_configured": false, 00:09:17.659 "data_offset": 2048, 00:09:17.659 "data_size": 63488 00:09:17.659 }, 00:09:17.659 { 00:09:17.659 "name": null, 00:09:17.659 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.659 "is_configured": false, 00:09:17.659 "data_offset": 2048, 00:09:17.659 "data_size": 63488 00:09:17.659 } 00:09:17.659 ] 00:09:17.659 }' 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.659 18:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.228 [2024-12-06 18:06:30.098092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.228 [2024-12-06 18:06:30.098167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.228 [2024-12-06 18:06:30.098199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:18.228 [2024-12-06 18:06:30.098210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.228 [2024-12-06 18:06:30.098705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.228 [2024-12-06 18:06:30.098730] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.228 [2024-12-06 18:06:30.098829] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.228 [2024-12-06 18:06:30.098860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.228 pt2 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.228 [2024-12-06 18:06:30.106081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.228 18:06:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.228 "name": "raid_bdev1", 00:09:18.228 "uuid": "3f34d4f4-ca5d-4db5-8df6-35da34850f90", 00:09:18.228 "strip_size_kb": 64, 00:09:18.228 "state": "configuring", 00:09:18.228 "raid_level": "raid0", 00:09:18.228 "superblock": true, 00:09:18.228 "num_base_bdevs": 3, 00:09:18.228 "num_base_bdevs_discovered": 1, 00:09:18.228 "num_base_bdevs_operational": 3, 00:09:18.228 "base_bdevs_list": [ 00:09:18.228 { 00:09:18.228 "name": "pt1", 00:09:18.228 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.228 "is_configured": true, 00:09:18.228 "data_offset": 2048, 00:09:18.228 "data_size": 63488 00:09:18.228 }, 00:09:18.228 { 00:09:18.228 "name": null, 00:09:18.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.228 "is_configured": false, 00:09:18.228 "data_offset": 0, 00:09:18.228 "data_size": 63488 00:09:18.228 }, 00:09:18.228 { 00:09:18.228 "name": null, 00:09:18.228 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.228 
"is_configured": false, 00:09:18.228 "data_offset": 2048, 00:09:18.228 "data_size": 63488 00:09:18.228 } 00:09:18.228 ] 00:09:18.228 }' 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.228 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.488 [2024-12-06 18:06:30.601227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.488 [2024-12-06 18:06:30.601370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.488 [2024-12-06 18:06:30.601424] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:18.488 [2024-12-06 18:06:30.601463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.488 [2024-12-06 18:06:30.602081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.488 [2024-12-06 18:06:30.602151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.488 [2024-12-06 18:06:30.602283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.488 [2024-12-06 18:06:30.602348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.488 pt2 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.488 [2024-12-06 18:06:30.613205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:18.488 [2024-12-06 18:06:30.613308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.488 [2024-12-06 18:06:30.613346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:18.488 [2024-12-06 18:06:30.613394] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.488 [2024-12-06 18:06:30.613900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.488 [2024-12-06 18:06:30.613973] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:18.488 [2024-12-06 18:06:30.614096] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:18.488 [2024-12-06 18:06:30.614160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:18.488 [2024-12-06 18:06:30.614354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:18.488 [2024-12-06 18:06:30.614401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.488 [2024-12-06 18:06:30.614720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:18.488 [2024-12-06 18:06:30.614926] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:18.488 [2024-12-06 18:06:30.614970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:18.488 [2024-12-06 18:06:30.615209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.488 pt3 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.488 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.748 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.748 "name": "raid_bdev1", 00:09:18.748 "uuid": "3f34d4f4-ca5d-4db5-8df6-35da34850f90", 00:09:18.748 "strip_size_kb": 64, 00:09:18.748 "state": "online", 00:09:18.748 "raid_level": "raid0", 00:09:18.748 "superblock": true, 00:09:18.748 "num_base_bdevs": 3, 00:09:18.748 "num_base_bdevs_discovered": 3, 00:09:18.748 "num_base_bdevs_operational": 3, 00:09:18.748 "base_bdevs_list": [ 00:09:18.748 { 00:09:18.748 "name": "pt1", 00:09:18.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.748 "is_configured": true, 00:09:18.748 "data_offset": 2048, 00:09:18.748 "data_size": 63488 00:09:18.748 }, 00:09:18.748 { 00:09:18.748 "name": "pt2", 00:09:18.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.748 "is_configured": true, 00:09:18.748 "data_offset": 2048, 00:09:18.748 "data_size": 63488 00:09:18.748 }, 00:09:18.748 { 00:09:18.748 "name": "pt3", 00:09:18.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.748 "is_configured": true, 00:09:18.748 "data_offset": 2048, 00:09:18.748 "data_size": 63488 00:09:18.748 } 00:09:18.748 ] 00:09:18.748 }' 00:09:18.748 18:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.748 18:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:19.007 18:06:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.007 [2024-12-06 18:06:31.061009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.007 "name": "raid_bdev1", 00:09:19.007 "aliases": [ 00:09:19.007 "3f34d4f4-ca5d-4db5-8df6-35da34850f90" 00:09:19.007 ], 00:09:19.007 "product_name": "Raid Volume", 00:09:19.007 "block_size": 512, 00:09:19.007 "num_blocks": 190464, 00:09:19.007 "uuid": "3f34d4f4-ca5d-4db5-8df6-35da34850f90", 00:09:19.007 "assigned_rate_limits": { 00:09:19.007 "rw_ios_per_sec": 0, 00:09:19.007 "rw_mbytes_per_sec": 0, 00:09:19.007 "r_mbytes_per_sec": 0, 00:09:19.007 "w_mbytes_per_sec": 0 00:09:19.007 }, 00:09:19.007 "claimed": false, 00:09:19.007 "zoned": false, 00:09:19.007 "supported_io_types": { 00:09:19.007 "read": true, 00:09:19.007 "write": true, 00:09:19.007 "unmap": true, 00:09:19.007 "flush": true, 00:09:19.007 "reset": true, 00:09:19.007 "nvme_admin": false, 00:09:19.007 "nvme_io": false, 00:09:19.007 "nvme_io_md": false, 00:09:19.007 
"write_zeroes": true, 00:09:19.007 "zcopy": false, 00:09:19.007 "get_zone_info": false, 00:09:19.007 "zone_management": false, 00:09:19.007 "zone_append": false, 00:09:19.007 "compare": false, 00:09:19.007 "compare_and_write": false, 00:09:19.007 "abort": false, 00:09:19.007 "seek_hole": false, 00:09:19.007 "seek_data": false, 00:09:19.007 "copy": false, 00:09:19.007 "nvme_iov_md": false 00:09:19.007 }, 00:09:19.007 "memory_domains": [ 00:09:19.007 { 00:09:19.007 "dma_device_id": "system", 00:09:19.007 "dma_device_type": 1 00:09:19.007 }, 00:09:19.007 { 00:09:19.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.007 "dma_device_type": 2 00:09:19.007 }, 00:09:19.007 { 00:09:19.007 "dma_device_id": "system", 00:09:19.007 "dma_device_type": 1 00:09:19.007 }, 00:09:19.007 { 00:09:19.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.007 "dma_device_type": 2 00:09:19.007 }, 00:09:19.007 { 00:09:19.007 "dma_device_id": "system", 00:09:19.007 "dma_device_type": 1 00:09:19.007 }, 00:09:19.007 { 00:09:19.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.007 "dma_device_type": 2 00:09:19.007 } 00:09:19.007 ], 00:09:19.007 "driver_specific": { 00:09:19.007 "raid": { 00:09:19.007 "uuid": "3f34d4f4-ca5d-4db5-8df6-35da34850f90", 00:09:19.007 "strip_size_kb": 64, 00:09:19.007 "state": "online", 00:09:19.007 "raid_level": "raid0", 00:09:19.007 "superblock": true, 00:09:19.007 "num_base_bdevs": 3, 00:09:19.007 "num_base_bdevs_discovered": 3, 00:09:19.007 "num_base_bdevs_operational": 3, 00:09:19.007 "base_bdevs_list": [ 00:09:19.007 { 00:09:19.007 "name": "pt1", 00:09:19.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.007 "is_configured": true, 00:09:19.007 "data_offset": 2048, 00:09:19.007 "data_size": 63488 00:09:19.007 }, 00:09:19.007 { 00:09:19.007 "name": "pt2", 00:09:19.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.007 "is_configured": true, 00:09:19.007 "data_offset": 2048, 00:09:19.007 "data_size": 63488 00:09:19.007 }, 00:09:19.007 
{ 00:09:19.007 "name": "pt3", 00:09:19.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.007 "is_configured": true, 00:09:19.007 "data_offset": 2048, 00:09:19.007 "data_size": 63488 00:09:19.007 } 00:09:19.007 ] 00:09:19.007 } 00:09:19.007 } 00:09:19.007 }' 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:19.007 pt2 00:09:19.007 pt3' 00:09:19.007 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:19.268 18:06:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.268 
[2024-12-06 18:06:31.340538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3f34d4f4-ca5d-4db5-8df6-35da34850f90 '!=' 3f34d4f4-ca5d-4db5-8df6-35da34850f90 ']' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65489 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65489 ']' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65489 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65489 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65489' 00:09:19.268 killing process with pid 65489 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65489 00:09:19.268 [2024-12-06 18:06:31.403387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.268 [2024-12-06 18:06:31.403587] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.268 18:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65489 00:09:19.268 [2024-12-06 18:06:31.403702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.268 [2024-12-06 18:06:31.403762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:19.837 [2024-12-06 18:06:31.765852] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.214 18:06:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:21.214 00:09:21.214 real 0m5.755s 00:09:21.214 user 0m8.207s 00:09:21.214 sys 0m0.950s 00:09:21.214 18:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.214 18:06:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.214 ************************************ 00:09:21.214 END TEST raid_superblock_test 00:09:21.214 ************************************ 00:09:21.214 18:06:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:21.214 18:06:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.214 18:06:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.214 18:06:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.214 ************************************ 00:09:21.214 START TEST raid_read_error_test 00:09:21.214 ************************************ 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:21.214 18:06:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.V92RfXC04G 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65748 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65748 00:09:21.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65748 ']' 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.214 18:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.214 [2024-12-06 18:06:33.305009] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:09:21.214 [2024-12-06 18:06:33.305179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65748 ] 00:09:21.473 [2024-12-06 18:06:33.485301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.473 [2024-12-06 18:06:33.624744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.733 [2024-12-06 18:06:33.867721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.733 [2024-12-06 18:06:33.867776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 BaseBdev1_malloc 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 true 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 [2024-12-06 18:06:34.302821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.302 [2024-12-06 18:06:34.302920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.302 [2024-12-06 18:06:34.302945] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:22.302 [2024-12-06 18:06:34.302958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.302 [2024-12-06 18:06:34.305401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.302 [2024-12-06 18:06:34.305451] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.302 BaseBdev1 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 BaseBdev2_malloc 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 true 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 [2024-12-06 18:06:34.376464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.302 [2024-12-06 18:06:34.376660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.302 [2024-12-06 18:06:34.376691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.302 [2024-12-06 18:06:34.376704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.302 [2024-12-06 18:06:34.379209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.302 [2024-12-06 18:06:34.379253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.302 BaseBdev2 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 BaseBdev3_malloc 00:09:22.302 18:06:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 true 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.302 [2024-12-06 18:06:34.457435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:22.302 [2024-12-06 18:06:34.457489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.302 [2024-12-06 18:06:34.457508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:22.302 [2024-12-06 18:06:34.457518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.302 [2024-12-06 18:06:34.459619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.302 [2024-12-06 18:06:34.459724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:22.302 BaseBdev3 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.302 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.561 [2024-12-06 18:06:34.469508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.561 [2024-12-06 18:06:34.471380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.561 [2024-12-06 18:06:34.471462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.561 [2024-12-06 18:06:34.471680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:22.561 [2024-12-06 18:06:34.471697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.561 [2024-12-06 18:06:34.471971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:22.561 [2024-12-06 18:06:34.472148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:22.561 [2024-12-06 18:06:34.472164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:22.561 [2024-12-06 18:06:34.472322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.561 18:06:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.561 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.562 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.562 "name": "raid_bdev1", 00:09:22.562 "uuid": "3645c402-a813-42db-a903-7423ed8081a4", 00:09:22.562 "strip_size_kb": 64, 00:09:22.562 "state": "online", 00:09:22.562 "raid_level": "raid0", 00:09:22.562 "superblock": true, 00:09:22.562 "num_base_bdevs": 3, 00:09:22.562 "num_base_bdevs_discovered": 3, 00:09:22.562 "num_base_bdevs_operational": 3, 00:09:22.562 "base_bdevs_list": [ 00:09:22.562 { 00:09:22.562 "name": "BaseBdev1", 00:09:22.562 "uuid": "598acd09-6b00-506c-87eb-c57dbac29b65", 00:09:22.562 "is_configured": true, 00:09:22.562 "data_offset": 2048, 00:09:22.562 "data_size": 63488 00:09:22.562 }, 00:09:22.562 { 00:09:22.562 "name": "BaseBdev2", 00:09:22.562 "uuid": "84e73108-7fc1-5ff5-9863-b75d93cfdb72", 00:09:22.562 "is_configured": true, 00:09:22.562 "data_offset": 2048, 00:09:22.562 "data_size": 63488 
00:09:22.562 }, 00:09:22.562 { 00:09:22.562 "name": "BaseBdev3", 00:09:22.562 "uuid": "88a48552-e609-5e6a-9dec-443e27f55f9f", 00:09:22.562 "is_configured": true, 00:09:22.562 "data_offset": 2048, 00:09:22.562 "data_size": 63488 00:09:22.562 } 00:09:22.562 ] 00:09:22.562 }' 00:09:22.562 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.562 18:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.820 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:22.820 18:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:23.094 [2024-12-06 18:06:34.998173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:24.033 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:24.033 18:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.033 18:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.033 18:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.033 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:24.033 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:24.033 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.034 "name": "raid_bdev1", 00:09:24.034 "uuid": "3645c402-a813-42db-a903-7423ed8081a4", 00:09:24.034 "strip_size_kb": 64, 00:09:24.034 "state": "online", 00:09:24.034 "raid_level": "raid0", 00:09:24.034 "superblock": true, 00:09:24.034 "num_base_bdevs": 3, 00:09:24.034 "num_base_bdevs_discovered": 3, 00:09:24.034 "num_base_bdevs_operational": 3, 00:09:24.034 "base_bdevs_list": [ 00:09:24.034 { 00:09:24.034 "name": "BaseBdev1", 00:09:24.034 "uuid": "598acd09-6b00-506c-87eb-c57dbac29b65", 00:09:24.034 "is_configured": true, 00:09:24.034 "data_offset": 2048, 00:09:24.034 "data_size": 63488 
00:09:24.034 }, 00:09:24.034 { 00:09:24.034 "name": "BaseBdev2", 00:09:24.034 "uuid": "84e73108-7fc1-5ff5-9863-b75d93cfdb72", 00:09:24.034 "is_configured": true, 00:09:24.034 "data_offset": 2048, 00:09:24.034 "data_size": 63488 00:09:24.034 }, 00:09:24.034 { 00:09:24.034 "name": "BaseBdev3", 00:09:24.034 "uuid": "88a48552-e609-5e6a-9dec-443e27f55f9f", 00:09:24.034 "is_configured": true, 00:09:24.034 "data_offset": 2048, 00:09:24.034 "data_size": 63488 00:09:24.034 } 00:09:24.034 ] 00:09:24.034 }' 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.034 18:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.291 18:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.292 [2024-12-06 18:06:36.339246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.292 [2024-12-06 18:06:36.339364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.292 [2024-12-06 18:06:36.342587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.292 [2024-12-06 18:06:36.342681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.292 [2024-12-06 18:06:36.342747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.292 [2024-12-06 18:06:36.342797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:24.292 { 00:09:24.292 "results": [ 00:09:24.292 { 00:09:24.292 "job": "raid_bdev1", 00:09:24.292 "core_mask": "0x1", 00:09:24.292 "workload": "randrw", 00:09:24.292 "percentage": 50, 
00:09:24.292 "status": "finished", 00:09:24.292 "queue_depth": 1, 00:09:24.292 "io_size": 131072, 00:09:24.292 "runtime": 1.341653, 00:09:24.292 "iops": 14020.763938216514, 00:09:24.292 "mibps": 1752.5954922770643, 00:09:24.292 "io_failed": 1, 00:09:24.292 "io_timeout": 0, 00:09:24.292 "avg_latency_us": 98.80769368618192, 00:09:24.292 "min_latency_us": 22.358078602620086, 00:09:24.292 "max_latency_us": 1631.2454148471616 00:09:24.292 } 00:09:24.292 ], 00:09:24.292 "core_count": 1 00:09:24.292 } 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65748 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65748 ']' 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65748 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65748 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65748' 00:09:24.292 killing process with pid 65748 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65748 00:09:24.292 [2024-12-06 18:06:36.371143] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.292 18:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65748 00:09:24.551 [2024-12-06 
18:06:36.632377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.926 18:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:25.926 18:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.V92RfXC04G 00:09:25.926 18:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:25.926 18:06:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:25.926 18:06:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:25.926 ************************************ 00:09:25.926 END TEST raid_read_error_test 00:09:25.926 ************************************ 00:09:25.926 18:06:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.926 18:06:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.926 18:06:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:25.926 00:09:25.926 real 0m4.821s 00:09:25.926 user 0m5.716s 00:09:25.926 sys 0m0.571s 00:09:25.926 18:06:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.926 18:06:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.926 18:06:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:25.926 18:06:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:25.926 18:06:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.926 18:06:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.926 ************************************ 00:09:25.926 START TEST raid_write_error_test 00:09:25.926 ************************************ 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:25.926 18:06:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:25.926 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:25.927 18:06:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VX8eh1fxhG 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65892 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65892 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65892 ']' 00:09:25.927 18:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.185 18:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.185 18:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:26.185 18:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.185 18:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.185 [2024-12-06 18:06:38.185703] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:09:26.185 [2024-12-06 18:06:38.185831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65892 ] 00:09:26.444 [2024-12-06 18:06:38.366490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.444 [2024-12-06 18:06:38.495404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.704 [2024-12-06 18:06:38.720846] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.704 [2024-12-06 18:06:38.720916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.963 BaseBdev1_malloc 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.963 true 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.963 [2024-12-06 18:06:39.102215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:26.963 [2024-12-06 18:06:39.102273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.963 [2024-12-06 18:06:39.102300] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:26.963 [2024-12-06 18:06:39.102315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.963 [2024-12-06 18:06:39.104683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.963 [2024-12-06 18:06:39.104730] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:26.963 BaseBdev1 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.963 18:06:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.223 BaseBdev2_malloc 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 true 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 [2024-12-06 18:06:39.171559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:27.223 [2024-12-06 18:06:39.171626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.223 [2024-12-06 18:06:39.171656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:27.223 [2024-12-06 18:06:39.171673] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.223 [2024-12-06 18:06:39.174272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.223 [2024-12-06 18:06:39.174318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:27.223 BaseBdev2 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.223 18:06:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 BaseBdev3_malloc 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.223 true 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:27.223 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.224 [2024-12-06 18:06:39.254579] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:27.224 [2024-12-06 18:06:39.254711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.224 [2024-12-06 18:06:39.254747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:27.224 [2024-12-06 18:06:39.254767] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.224 [2024-12-06 18:06:39.257438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.224 [2024-12-06 18:06:39.257485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:27.224 BaseBdev3 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.224 [2024-12-06 18:06:39.262659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.224 [2024-12-06 18:06:39.264809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.224 [2024-12-06 18:06:39.264894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.224 [2024-12-06 18:06:39.265153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:27.224 [2024-12-06 18:06:39.265171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:27.224 [2024-12-06 18:06:39.265466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:27.224 [2024-12-06 18:06:39.265732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:27.224 [2024-12-06 18:06:39.265754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:27.224 [2024-12-06 18:06:39.265954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.224 "name": "raid_bdev1", 00:09:27.224 "uuid": "0087b0fc-db70-4f22-81fe-264db10b9bb7", 00:09:27.224 "strip_size_kb": 64, 00:09:27.224 "state": "online", 00:09:27.224 "raid_level": "raid0", 00:09:27.224 "superblock": true, 00:09:27.224 "num_base_bdevs": 3, 00:09:27.224 "num_base_bdevs_discovered": 3, 00:09:27.224 "num_base_bdevs_operational": 3, 00:09:27.224 "base_bdevs_list": [ 00:09:27.224 { 00:09:27.224 "name": "BaseBdev1", 
00:09:27.224 "uuid": "a43a29b8-aa27-53bd-94aa-d3ac722d464f", 00:09:27.224 "is_configured": true, 00:09:27.224 "data_offset": 2048, 00:09:27.224 "data_size": 63488 00:09:27.224 }, 00:09:27.224 { 00:09:27.224 "name": "BaseBdev2", 00:09:27.224 "uuid": "7dacc662-c8cc-5581-b0f8-f08f8896764a", 00:09:27.224 "is_configured": true, 00:09:27.224 "data_offset": 2048, 00:09:27.224 "data_size": 63488 00:09:27.224 }, 00:09:27.224 { 00:09:27.224 "name": "BaseBdev3", 00:09:27.224 "uuid": "5730b89b-2807-56ed-aba0-ca79f141d6ca", 00:09:27.224 "is_configured": true, 00:09:27.224 "data_offset": 2048, 00:09:27.224 "data_size": 63488 00:09:27.224 } 00:09:27.224 ] 00:09:27.224 }' 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.224 18:06:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.792 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:27.792 18:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:27.792 [2024-12-06 18:06:39.811355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.729 "name": "raid_bdev1", 00:09:28.729 "uuid": "0087b0fc-db70-4f22-81fe-264db10b9bb7", 00:09:28.729 "strip_size_kb": 64, 00:09:28.729 "state": "online", 00:09:28.729 
"raid_level": "raid0", 00:09:28.729 "superblock": true, 00:09:28.729 "num_base_bdevs": 3, 00:09:28.729 "num_base_bdevs_discovered": 3, 00:09:28.729 "num_base_bdevs_operational": 3, 00:09:28.729 "base_bdevs_list": [ 00:09:28.729 { 00:09:28.729 "name": "BaseBdev1", 00:09:28.729 "uuid": "a43a29b8-aa27-53bd-94aa-d3ac722d464f", 00:09:28.729 "is_configured": true, 00:09:28.729 "data_offset": 2048, 00:09:28.729 "data_size": 63488 00:09:28.729 }, 00:09:28.729 { 00:09:28.729 "name": "BaseBdev2", 00:09:28.729 "uuid": "7dacc662-c8cc-5581-b0f8-f08f8896764a", 00:09:28.729 "is_configured": true, 00:09:28.729 "data_offset": 2048, 00:09:28.729 "data_size": 63488 00:09:28.729 }, 00:09:28.729 { 00:09:28.729 "name": "BaseBdev3", 00:09:28.729 "uuid": "5730b89b-2807-56ed-aba0-ca79f141d6ca", 00:09:28.729 "is_configured": true, 00:09:28.729 "data_offset": 2048, 00:09:28.729 "data_size": 63488 00:09:28.729 } 00:09:28.729 ] 00:09:28.729 }' 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.729 18:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.298 [2024-12-06 18:06:41.212688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.298 [2024-12-06 18:06:41.212722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.298 [2024-12-06 18:06:41.215870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.298 [2024-12-06 18:06:41.215922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.298 [2024-12-06 18:06:41.215965] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.298 [2024-12-06 18:06:41.215976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:29.298 { 00:09:29.298 "results": [ 00:09:29.298 { 00:09:29.298 "job": "raid_bdev1", 00:09:29.298 "core_mask": "0x1", 00:09:29.298 "workload": "randrw", 00:09:29.298 "percentage": 50, 00:09:29.298 "status": "finished", 00:09:29.298 "queue_depth": 1, 00:09:29.298 "io_size": 131072, 00:09:29.298 "runtime": 1.401864, 00:09:29.298 "iops": 13400.015978725469, 00:09:29.298 "mibps": 1675.0019973406836, 00:09:29.298 "io_failed": 1, 00:09:29.298 "io_timeout": 0, 00:09:29.298 "avg_latency_us": 103.20024509564634, 00:09:29.298 "min_latency_us": 22.69344978165939, 00:09:29.298 "max_latency_us": 1681.3275109170306 00:09:29.298 } 00:09:29.298 ], 00:09:29.298 "core_count": 1 00:09:29.298 } 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65892 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65892 ']' 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65892 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65892 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65892' 00:09:29.298 killing process with pid 65892 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65892 00:09:29.298 [2024-12-06 18:06:41.251970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.298 18:06:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65892 00:09:29.556 [2024-12-06 18:06:41.523512] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.934 18:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VX8eh1fxhG 00:09:30.934 18:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:30.935 18:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:30.935 18:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:30.935 18:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:30.935 18:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.935 18:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.935 18:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:30.935 ************************************ 00:09:30.935 END TEST raid_write_error_test 00:09:30.935 ************************************ 00:09:30.935 00:09:30.935 real 0m4.833s 00:09:30.935 user 0m5.729s 00:09:30.935 sys 0m0.578s 00:09:30.935 18:06:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.935 18:06:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.935 18:06:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:30.935 18:06:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:30.935 18:06:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:30.935 18:06:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.935 18:06:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.935 ************************************ 00:09:30.935 START TEST raid_state_function_test 00:09:30.935 ************************************ 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:30.935 18:06:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66037 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66037' 00:09:30.935 Process raid pid: 66037 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66037 00:09:30.935 18:06:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 66037 ']' 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.935 18:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.935 [2024-12-06 18:06:43.083349] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:09:30.935 [2024-12-06 18:06:43.083589] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.194 [2024-12-06 18:06:43.266309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.453 [2024-12-06 18:06:43.396394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.712 [2024-12-06 18:06:43.641932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.712 [2024-12-06 18:06:43.642100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.971 18:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.971 18:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:31.971 18:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.971 18:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.971 18:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.971 [2024-12-06 18:06:43.994198] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.971 [2024-12-06 18:06:43.994263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.971 [2024-12-06 18:06:43.994274] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.971 [2024-12-06 18:06:43.994286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.971 [2024-12-06 18:06:43.994293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.971 [2024-12-06 18:06:43.994303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.971 18:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.971 18:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.971 18:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.971 18:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.971 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.972 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.972 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.972 "name": "Existed_Raid", 00:09:31.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.972 "strip_size_kb": 64, 00:09:31.972 "state": "configuring", 00:09:31.972 "raid_level": "concat", 00:09:31.972 "superblock": false, 00:09:31.972 "num_base_bdevs": 3, 00:09:31.972 "num_base_bdevs_discovered": 0, 00:09:31.972 "num_base_bdevs_operational": 3, 00:09:31.972 "base_bdevs_list": [ 00:09:31.972 { 00:09:31.972 "name": "BaseBdev1", 00:09:31.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.972 "is_configured": false, 00:09:31.972 "data_offset": 0, 00:09:31.972 "data_size": 0 00:09:31.972 }, 00:09:31.972 { 00:09:31.972 "name": "BaseBdev2", 00:09:31.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.972 "is_configured": false, 00:09:31.972 "data_offset": 0, 00:09:31.972 "data_size": 0 00:09:31.972 }, 00:09:31.972 { 00:09:31.972 "name": "BaseBdev3", 00:09:31.972 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:31.972 "is_configured": false, 00:09:31.972 "data_offset": 0, 00:09:31.972 "data_size": 0 00:09:31.972 } 00:09:31.972 ] 00:09:31.972 }' 00:09:31.972 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.972 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.540 [2024-12-06 18:06:44.465337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.540 [2024-12-06 18:06:44.465448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.540 [2024-12-06 18:06:44.477329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.540 [2024-12-06 18:06:44.477436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.540 [2024-12-06 18:06:44.477486] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.540 [2024-12-06 18:06:44.477540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:32.540 [2024-12-06 18:06:44.477580] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.540 [2024-12-06 18:06:44.477626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.540 [2024-12-06 18:06:44.532263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.540 BaseBdev1 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.540 [ 00:09:32.540 { 00:09:32.540 "name": "BaseBdev1", 00:09:32.540 "aliases": [ 00:09:32.540 "6c1bd66b-37ce-4f70-83c3-2833ceb064cd" 00:09:32.540 ], 00:09:32.540 "product_name": "Malloc disk", 00:09:32.540 "block_size": 512, 00:09:32.540 "num_blocks": 65536, 00:09:32.540 "uuid": "6c1bd66b-37ce-4f70-83c3-2833ceb064cd", 00:09:32.540 "assigned_rate_limits": { 00:09:32.540 "rw_ios_per_sec": 0, 00:09:32.540 "rw_mbytes_per_sec": 0, 00:09:32.540 "r_mbytes_per_sec": 0, 00:09:32.540 "w_mbytes_per_sec": 0 00:09:32.540 }, 00:09:32.540 "claimed": true, 00:09:32.540 "claim_type": "exclusive_write", 00:09:32.540 "zoned": false, 00:09:32.540 "supported_io_types": { 00:09:32.540 "read": true, 00:09:32.540 "write": true, 00:09:32.540 "unmap": true, 00:09:32.540 "flush": true, 00:09:32.540 "reset": true, 00:09:32.540 "nvme_admin": false, 00:09:32.540 "nvme_io": false, 00:09:32.540 "nvme_io_md": false, 00:09:32.540 "write_zeroes": true, 00:09:32.540 "zcopy": true, 00:09:32.540 "get_zone_info": false, 00:09:32.540 "zone_management": false, 00:09:32.540 "zone_append": false, 00:09:32.540 "compare": false, 00:09:32.540 "compare_and_write": false, 00:09:32.540 "abort": true, 00:09:32.540 "seek_hole": false, 00:09:32.540 "seek_data": false, 00:09:32.540 "copy": true, 00:09:32.540 "nvme_iov_md": false 00:09:32.540 }, 00:09:32.540 "memory_domains": [ 00:09:32.540 { 00:09:32.540 "dma_device_id": "system", 00:09:32.540 "dma_device_type": 1 00:09:32.540 }, 00:09:32.540 { 00:09:32.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:32.540 "dma_device_type": 2 00:09:32.540 } 00:09:32.540 ], 00:09:32.540 "driver_specific": {} 00:09:32.540 } 00:09:32.540 ] 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.540 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.541 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.541 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.541 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.541 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.541 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.541 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.541 18:06:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.541 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.541 "name": "Existed_Raid", 00:09:32.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.541 "strip_size_kb": 64, 00:09:32.541 "state": "configuring", 00:09:32.541 "raid_level": "concat", 00:09:32.541 "superblock": false, 00:09:32.541 "num_base_bdevs": 3, 00:09:32.541 "num_base_bdevs_discovered": 1, 00:09:32.541 "num_base_bdevs_operational": 3, 00:09:32.541 "base_bdevs_list": [ 00:09:32.541 { 00:09:32.541 "name": "BaseBdev1", 00:09:32.541 "uuid": "6c1bd66b-37ce-4f70-83c3-2833ceb064cd", 00:09:32.541 "is_configured": true, 00:09:32.541 "data_offset": 0, 00:09:32.541 "data_size": 65536 00:09:32.541 }, 00:09:32.541 { 00:09:32.541 "name": "BaseBdev2", 00:09:32.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.541 "is_configured": false, 00:09:32.541 "data_offset": 0, 00:09:32.541 "data_size": 0 00:09:32.541 }, 00:09:32.541 { 00:09:32.541 "name": "BaseBdev3", 00:09:32.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.541 "is_configured": false, 00:09:32.541 "data_offset": 0, 00:09:32.541 "data_size": 0 00:09:32.541 } 00:09:32.541 ] 00:09:32.541 }' 00:09:32.541 18:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.541 18:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.107 [2024-12-06 18:06:45.019566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.107 [2024-12-06 18:06:45.019634] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.107 [2024-12-06 18:06:45.031587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.107 [2024-12-06 18:06:45.033713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.107 [2024-12-06 18:06:45.033824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.107 [2024-12-06 18:06:45.033878] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.107 [2024-12-06 18:06:45.033930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.107 18:06:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.107 "name": "Existed_Raid", 00:09:33.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.107 "strip_size_kb": 64, 00:09:33.107 "state": "configuring", 00:09:33.107 "raid_level": "concat", 00:09:33.107 "superblock": false, 00:09:33.107 "num_base_bdevs": 3, 00:09:33.107 "num_base_bdevs_discovered": 1, 00:09:33.107 "num_base_bdevs_operational": 3, 00:09:33.107 "base_bdevs_list": [ 00:09:33.107 { 00:09:33.107 "name": "BaseBdev1", 00:09:33.107 "uuid": "6c1bd66b-37ce-4f70-83c3-2833ceb064cd", 00:09:33.107 "is_configured": true, 00:09:33.107 "data_offset": 
0, 00:09:33.107 "data_size": 65536 00:09:33.107 }, 00:09:33.107 { 00:09:33.107 "name": "BaseBdev2", 00:09:33.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.107 "is_configured": false, 00:09:33.107 "data_offset": 0, 00:09:33.107 "data_size": 0 00:09:33.107 }, 00:09:33.107 { 00:09:33.107 "name": "BaseBdev3", 00:09:33.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.107 "is_configured": false, 00:09:33.107 "data_offset": 0, 00:09:33.107 "data_size": 0 00:09:33.107 } 00:09:33.107 ] 00:09:33.107 }' 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.107 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.365 [2024-12-06 18:06:45.495020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.365 BaseBdev2 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.365 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.365 [ 00:09:33.365 { 00:09:33.365 "name": "BaseBdev2", 00:09:33.365 "aliases": [ 00:09:33.365 "6b99a9d7-0192-4c6b-b388-2aeb02911887" 00:09:33.365 ], 00:09:33.365 "product_name": "Malloc disk", 00:09:33.365 "block_size": 512, 00:09:33.365 "num_blocks": 65536, 00:09:33.365 "uuid": "6b99a9d7-0192-4c6b-b388-2aeb02911887", 00:09:33.365 "assigned_rate_limits": { 00:09:33.365 "rw_ios_per_sec": 0, 00:09:33.365 "rw_mbytes_per_sec": 0, 00:09:33.365 "r_mbytes_per_sec": 0, 00:09:33.365 "w_mbytes_per_sec": 0 00:09:33.365 }, 00:09:33.365 "claimed": true, 00:09:33.365 "claim_type": "exclusive_write", 00:09:33.365 "zoned": false, 00:09:33.365 "supported_io_types": { 00:09:33.365 "read": true, 00:09:33.365 "write": true, 00:09:33.365 "unmap": true, 00:09:33.365 "flush": true, 00:09:33.365 "reset": true, 00:09:33.365 "nvme_admin": false, 00:09:33.365 "nvme_io": false, 00:09:33.365 "nvme_io_md": false, 00:09:33.365 "write_zeroes": true, 00:09:33.365 "zcopy": true, 00:09:33.365 "get_zone_info": false, 00:09:33.365 "zone_management": false, 00:09:33.365 "zone_append": false, 00:09:33.365 "compare": false, 00:09:33.365 "compare_and_write": false, 00:09:33.365 "abort": true, 00:09:33.365 "seek_hole": 
false, 00:09:33.365 "seek_data": false, 00:09:33.365 "copy": true, 00:09:33.365 "nvme_iov_md": false 00:09:33.365 }, 00:09:33.365 "memory_domains": [ 00:09:33.365 { 00:09:33.365 "dma_device_id": "system", 00:09:33.365 "dma_device_type": 1 00:09:33.365 }, 00:09:33.365 { 00:09:33.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.629 "dma_device_type": 2 00:09:33.629 } 00:09:33.629 ], 00:09:33.629 "driver_specific": {} 00:09:33.629 } 00:09:33.629 ] 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.629 "name": "Existed_Raid", 00:09:33.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.629 "strip_size_kb": 64, 00:09:33.629 "state": "configuring", 00:09:33.629 "raid_level": "concat", 00:09:33.629 "superblock": false, 00:09:33.629 "num_base_bdevs": 3, 00:09:33.629 "num_base_bdevs_discovered": 2, 00:09:33.629 "num_base_bdevs_operational": 3, 00:09:33.629 "base_bdevs_list": [ 00:09:33.629 { 00:09:33.629 "name": "BaseBdev1", 00:09:33.629 "uuid": "6c1bd66b-37ce-4f70-83c3-2833ceb064cd", 00:09:33.629 "is_configured": true, 00:09:33.629 "data_offset": 0, 00:09:33.629 "data_size": 65536 00:09:33.629 }, 00:09:33.629 { 00:09:33.629 "name": "BaseBdev2", 00:09:33.629 "uuid": "6b99a9d7-0192-4c6b-b388-2aeb02911887", 00:09:33.629 "is_configured": true, 00:09:33.629 "data_offset": 0, 00:09:33.629 "data_size": 65536 00:09:33.629 }, 00:09:33.629 { 00:09:33.629 "name": "BaseBdev3", 00:09:33.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.629 "is_configured": false, 00:09:33.629 "data_offset": 0, 00:09:33.629 "data_size": 0 00:09:33.629 } 00:09:33.629 ] 00:09:33.629 }' 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.629 18:06:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.894 18:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.894 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.894 18:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.894 [2024-12-06 18:06:46.001749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.894 [2024-12-06 18:06:46.001915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:33.894 [2024-12-06 18:06:46.001964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:33.894 [2024-12-06 18:06:46.002387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:33.894 [2024-12-06 18:06:46.002644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:33.894 [2024-12-06 18:06:46.002699] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:33.894 [2024-12-06 18:06:46.003095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.894 BaseBdev3 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.894 18:06:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.894 [ 00:09:33.894 { 00:09:33.894 "name": "BaseBdev3", 00:09:33.894 "aliases": [ 00:09:33.894 "b22c4697-d022-42ae-a22a-8278dd97847f" 00:09:33.894 ], 00:09:33.894 "product_name": "Malloc disk", 00:09:33.894 "block_size": 512, 00:09:33.894 "num_blocks": 65536, 00:09:33.894 "uuid": "b22c4697-d022-42ae-a22a-8278dd97847f", 00:09:33.894 "assigned_rate_limits": { 00:09:33.894 "rw_ios_per_sec": 0, 00:09:33.894 "rw_mbytes_per_sec": 0, 00:09:33.894 "r_mbytes_per_sec": 0, 00:09:33.894 "w_mbytes_per_sec": 0 00:09:33.894 }, 00:09:33.894 "claimed": true, 00:09:33.894 "claim_type": "exclusive_write", 00:09:33.894 "zoned": false, 00:09:33.894 "supported_io_types": { 00:09:33.894 "read": true, 00:09:33.894 "write": true, 00:09:33.894 "unmap": true, 00:09:33.894 "flush": true, 00:09:33.894 "reset": true, 00:09:33.894 "nvme_admin": false, 00:09:33.894 "nvme_io": false, 00:09:33.894 "nvme_io_md": false, 00:09:33.894 "write_zeroes": true, 00:09:33.894 "zcopy": true, 00:09:33.894 "get_zone_info": false, 00:09:33.894 "zone_management": false, 00:09:33.894 "zone_append": false, 00:09:33.894 "compare": false, 
00:09:33.894 "compare_and_write": false, 00:09:33.894 "abort": true, 00:09:33.894 "seek_hole": false, 00:09:33.894 "seek_data": false, 00:09:33.894 "copy": true, 00:09:33.894 "nvme_iov_md": false 00:09:33.894 }, 00:09:33.894 "memory_domains": [ 00:09:33.894 { 00:09:33.894 "dma_device_id": "system", 00:09:33.894 "dma_device_type": 1 00:09:33.894 }, 00:09:33.894 { 00:09:33.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.894 "dma_device_type": 2 00:09:33.894 } 00:09:33.894 ], 00:09:33.894 "driver_specific": {} 00:09:33.894 } 00:09:33.894 ] 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.894 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.895 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.154 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.154 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.154 "name": "Existed_Raid", 00:09:34.154 "uuid": "3600901b-b78d-4301-9269-76a7b881e4e8", 00:09:34.154 "strip_size_kb": 64, 00:09:34.154 "state": "online", 00:09:34.154 "raid_level": "concat", 00:09:34.154 "superblock": false, 00:09:34.154 "num_base_bdevs": 3, 00:09:34.154 "num_base_bdevs_discovered": 3, 00:09:34.154 "num_base_bdevs_operational": 3, 00:09:34.154 "base_bdevs_list": [ 00:09:34.154 { 00:09:34.154 "name": "BaseBdev1", 00:09:34.154 "uuid": "6c1bd66b-37ce-4f70-83c3-2833ceb064cd", 00:09:34.154 "is_configured": true, 00:09:34.154 "data_offset": 0, 00:09:34.154 "data_size": 65536 00:09:34.154 }, 00:09:34.154 { 00:09:34.154 "name": "BaseBdev2", 00:09:34.154 "uuid": "6b99a9d7-0192-4c6b-b388-2aeb02911887", 00:09:34.154 "is_configured": true, 00:09:34.154 "data_offset": 0, 00:09:34.154 "data_size": 65536 00:09:34.154 }, 00:09:34.154 { 00:09:34.154 "name": "BaseBdev3", 00:09:34.154 "uuid": "b22c4697-d022-42ae-a22a-8278dd97847f", 00:09:34.154 "is_configured": true, 00:09:34.154 "data_offset": 0, 00:09:34.154 "data_size": 65536 00:09:34.154 } 00:09:34.154 ] 00:09:34.154 }' 00:09:34.154 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:34.154 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.415 [2024-12-06 18:06:46.529288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.415 "name": "Existed_Raid", 00:09:34.415 "aliases": [ 00:09:34.415 "3600901b-b78d-4301-9269-76a7b881e4e8" 00:09:34.415 ], 00:09:34.415 "product_name": "Raid Volume", 00:09:34.415 "block_size": 512, 00:09:34.415 "num_blocks": 196608, 00:09:34.415 "uuid": "3600901b-b78d-4301-9269-76a7b881e4e8", 00:09:34.415 "assigned_rate_limits": { 00:09:34.415 "rw_ios_per_sec": 0, 00:09:34.415 "rw_mbytes_per_sec": 0, 00:09:34.415 "r_mbytes_per_sec": 
0, 00:09:34.415 "w_mbytes_per_sec": 0 00:09:34.415 }, 00:09:34.415 "claimed": false, 00:09:34.415 "zoned": false, 00:09:34.415 "supported_io_types": { 00:09:34.415 "read": true, 00:09:34.415 "write": true, 00:09:34.415 "unmap": true, 00:09:34.415 "flush": true, 00:09:34.415 "reset": true, 00:09:34.415 "nvme_admin": false, 00:09:34.415 "nvme_io": false, 00:09:34.415 "nvme_io_md": false, 00:09:34.415 "write_zeroes": true, 00:09:34.415 "zcopy": false, 00:09:34.415 "get_zone_info": false, 00:09:34.415 "zone_management": false, 00:09:34.415 "zone_append": false, 00:09:34.415 "compare": false, 00:09:34.415 "compare_and_write": false, 00:09:34.415 "abort": false, 00:09:34.415 "seek_hole": false, 00:09:34.415 "seek_data": false, 00:09:34.415 "copy": false, 00:09:34.415 "nvme_iov_md": false 00:09:34.415 }, 00:09:34.415 "memory_domains": [ 00:09:34.415 { 00:09:34.415 "dma_device_id": "system", 00:09:34.415 "dma_device_type": 1 00:09:34.415 }, 00:09:34.415 { 00:09:34.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.415 "dma_device_type": 2 00:09:34.415 }, 00:09:34.415 { 00:09:34.415 "dma_device_id": "system", 00:09:34.415 "dma_device_type": 1 00:09:34.415 }, 00:09:34.415 { 00:09:34.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.415 "dma_device_type": 2 00:09:34.415 }, 00:09:34.415 { 00:09:34.415 "dma_device_id": "system", 00:09:34.415 "dma_device_type": 1 00:09:34.415 }, 00:09:34.415 { 00:09:34.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.415 "dma_device_type": 2 00:09:34.415 } 00:09:34.415 ], 00:09:34.415 "driver_specific": { 00:09:34.415 "raid": { 00:09:34.415 "uuid": "3600901b-b78d-4301-9269-76a7b881e4e8", 00:09:34.415 "strip_size_kb": 64, 00:09:34.415 "state": "online", 00:09:34.415 "raid_level": "concat", 00:09:34.415 "superblock": false, 00:09:34.415 "num_base_bdevs": 3, 00:09:34.415 "num_base_bdevs_discovered": 3, 00:09:34.415 "num_base_bdevs_operational": 3, 00:09:34.415 "base_bdevs_list": [ 00:09:34.415 { 00:09:34.415 "name": "BaseBdev1", 
00:09:34.415 "uuid": "6c1bd66b-37ce-4f70-83c3-2833ceb064cd", 00:09:34.415 "is_configured": true, 00:09:34.415 "data_offset": 0, 00:09:34.415 "data_size": 65536 00:09:34.415 }, 00:09:34.415 { 00:09:34.415 "name": "BaseBdev2", 00:09:34.415 "uuid": "6b99a9d7-0192-4c6b-b388-2aeb02911887", 00:09:34.415 "is_configured": true, 00:09:34.415 "data_offset": 0, 00:09:34.415 "data_size": 65536 00:09:34.415 }, 00:09:34.415 { 00:09:34.415 "name": "BaseBdev3", 00:09:34.415 "uuid": "b22c4697-d022-42ae-a22a-8278dd97847f", 00:09:34.415 "is_configured": true, 00:09:34.415 "data_offset": 0, 00:09:34.415 "data_size": 65536 00:09:34.415 } 00:09:34.415 ] 00:09:34.415 } 00:09:34.415 } 00:09:34.415 }' 00:09:34.415 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:34.676 BaseBdev2 00:09:34.676 BaseBdev3' 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.676 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.676 [2024-12-06 18:06:46.792550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.676 [2024-12-06 18:06:46.792582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.676 [2024-12-06 18:06:46.792639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.937 "name": "Existed_Raid", 00:09:34.937 "uuid": "3600901b-b78d-4301-9269-76a7b881e4e8", 00:09:34.937 "strip_size_kb": 64, 00:09:34.937 "state": "offline", 00:09:34.937 "raid_level": "concat", 00:09:34.937 "superblock": false, 00:09:34.937 "num_base_bdevs": 3, 00:09:34.937 "num_base_bdevs_discovered": 2, 00:09:34.937 "num_base_bdevs_operational": 2, 00:09:34.937 "base_bdevs_list": [ 00:09:34.937 { 00:09:34.937 "name": null, 00:09:34.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.937 "is_configured": false, 00:09:34.937 "data_offset": 0, 00:09:34.937 "data_size": 65536 00:09:34.937 }, 00:09:34.937 { 00:09:34.937 "name": "BaseBdev2", 00:09:34.937 "uuid": 
"6b99a9d7-0192-4c6b-b388-2aeb02911887", 00:09:34.937 "is_configured": true, 00:09:34.937 "data_offset": 0, 00:09:34.937 "data_size": 65536 00:09:34.937 }, 00:09:34.937 { 00:09:34.937 "name": "BaseBdev3", 00:09:34.937 "uuid": "b22c4697-d022-42ae-a22a-8278dd97847f", 00:09:34.937 "is_configured": true, 00:09:34.937 "data_offset": 0, 00:09:34.937 "data_size": 65536 00:09:34.937 } 00:09:34.937 ] 00:09:34.937 }' 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.937 18:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.195 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:35.195 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.195 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.195 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.195 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.195 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.195 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.454 [2024-12-06 18:06:47.375770] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.454 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.454 [2024-12-06 18:06:47.534737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.454 [2024-12-06 18:06:47.534797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.714 18:06:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.714 BaseBdev2 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.714 
18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.714 [ 00:09:35.714 { 00:09:35.714 "name": "BaseBdev2", 00:09:35.714 "aliases": [ 00:09:35.714 "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f" 00:09:35.714 ], 00:09:35.714 "product_name": "Malloc disk", 00:09:35.714 "block_size": 512, 00:09:35.714 "num_blocks": 65536, 00:09:35.714 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:35.714 "assigned_rate_limits": { 00:09:35.714 "rw_ios_per_sec": 0, 00:09:35.714 "rw_mbytes_per_sec": 0, 00:09:35.714 "r_mbytes_per_sec": 0, 00:09:35.714 "w_mbytes_per_sec": 0 00:09:35.714 }, 00:09:35.714 "claimed": false, 00:09:35.714 "zoned": false, 00:09:35.714 "supported_io_types": { 00:09:35.714 "read": true, 00:09:35.714 "write": true, 00:09:35.714 "unmap": true, 00:09:35.714 "flush": true, 00:09:35.714 "reset": true, 00:09:35.714 "nvme_admin": false, 00:09:35.714 "nvme_io": false, 00:09:35.714 "nvme_io_md": false, 00:09:35.714 "write_zeroes": true, 
00:09:35.714 "zcopy": true, 00:09:35.714 "get_zone_info": false, 00:09:35.714 "zone_management": false, 00:09:35.714 "zone_append": false, 00:09:35.714 "compare": false, 00:09:35.714 "compare_and_write": false, 00:09:35.714 "abort": true, 00:09:35.714 "seek_hole": false, 00:09:35.714 "seek_data": false, 00:09:35.714 "copy": true, 00:09:35.714 "nvme_iov_md": false 00:09:35.714 }, 00:09:35.714 "memory_domains": [ 00:09:35.714 { 00:09:35.714 "dma_device_id": "system", 00:09:35.714 "dma_device_type": 1 00:09:35.714 }, 00:09:35.714 { 00:09:35.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.714 "dma_device_type": 2 00:09:35.714 } 00:09:35.714 ], 00:09:35.714 "driver_specific": {} 00:09:35.714 } 00:09:35.714 ] 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.714 BaseBdev3 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.714 18:06:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.714 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.715 [ 00:09:35.715 { 00:09:35.715 "name": "BaseBdev3", 00:09:35.715 "aliases": [ 00:09:35.715 "acfd4125-54f5-44fb-a054-91f3650254f6" 00:09:35.715 ], 00:09:35.715 "product_name": "Malloc disk", 00:09:35.715 "block_size": 512, 00:09:35.715 "num_blocks": 65536, 00:09:35.715 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:35.715 "assigned_rate_limits": { 00:09:35.715 "rw_ios_per_sec": 0, 00:09:35.715 "rw_mbytes_per_sec": 0, 00:09:35.715 "r_mbytes_per_sec": 0, 00:09:35.715 "w_mbytes_per_sec": 0 00:09:35.715 }, 00:09:35.715 "claimed": false, 00:09:35.715 "zoned": false, 00:09:35.715 "supported_io_types": { 00:09:35.715 "read": true, 00:09:35.715 "write": true, 00:09:35.715 "unmap": true, 00:09:35.715 "flush": true, 00:09:35.715 "reset": true, 00:09:35.715 "nvme_admin": false, 00:09:35.715 "nvme_io": false, 00:09:35.715 "nvme_io_md": false, 00:09:35.715 "write_zeroes": true, 
00:09:35.715 "zcopy": true, 00:09:35.715 "get_zone_info": false, 00:09:35.715 "zone_management": false, 00:09:35.715 "zone_append": false, 00:09:35.715 "compare": false, 00:09:35.715 "compare_and_write": false, 00:09:35.715 "abort": true, 00:09:35.715 "seek_hole": false, 00:09:35.715 "seek_data": false, 00:09:35.715 "copy": true, 00:09:35.715 "nvme_iov_md": false 00:09:35.715 }, 00:09:35.715 "memory_domains": [ 00:09:35.715 { 00:09:35.715 "dma_device_id": "system", 00:09:35.715 "dma_device_type": 1 00:09:35.715 }, 00:09:35.715 { 00:09:35.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.715 "dma_device_type": 2 00:09:35.715 } 00:09:35.715 ], 00:09:35.715 "driver_specific": {} 00:09:35.715 } 00:09:35.715 ] 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.715 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.715 [2024-12-06 18:06:47.873371] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.715 [2024-12-06 18:06:47.873417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.715 [2024-12-06 18:06:47.873444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.715 [2024-12-06 18:06:47.875462] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.974 "name": "Existed_Raid", 00:09:35.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.974 "strip_size_kb": 64, 00:09:35.974 "state": "configuring", 00:09:35.974 "raid_level": "concat", 00:09:35.974 "superblock": false, 00:09:35.974 "num_base_bdevs": 3, 00:09:35.974 "num_base_bdevs_discovered": 2, 00:09:35.974 "num_base_bdevs_operational": 3, 00:09:35.974 "base_bdevs_list": [ 00:09:35.974 { 00:09:35.974 "name": "BaseBdev1", 00:09:35.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.974 "is_configured": false, 00:09:35.974 "data_offset": 0, 00:09:35.974 "data_size": 0 00:09:35.974 }, 00:09:35.974 { 00:09:35.974 "name": "BaseBdev2", 00:09:35.974 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:35.974 "is_configured": true, 00:09:35.974 "data_offset": 0, 00:09:35.974 "data_size": 65536 00:09:35.974 }, 00:09:35.974 { 00:09:35.974 "name": "BaseBdev3", 00:09:35.974 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:35.974 "is_configured": true, 00:09:35.974 "data_offset": 0, 00:09:35.974 "data_size": 65536 00:09:35.974 } 00:09:35.974 ] 00:09:35.974 }' 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.974 18:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.233 [2024-12-06 18:06:48.388563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.233 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.492 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.492 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.492 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.492 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.492 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.492 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.492 "name": "Existed_Raid", 00:09:36.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.492 "strip_size_kb": 64, 00:09:36.492 "state": "configuring", 00:09:36.492 "raid_level": "concat", 00:09:36.492 "superblock": false, 
00:09:36.492 "num_base_bdevs": 3, 00:09:36.492 "num_base_bdevs_discovered": 1, 00:09:36.492 "num_base_bdevs_operational": 3, 00:09:36.492 "base_bdevs_list": [ 00:09:36.492 { 00:09:36.492 "name": "BaseBdev1", 00:09:36.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.492 "is_configured": false, 00:09:36.492 "data_offset": 0, 00:09:36.492 "data_size": 0 00:09:36.492 }, 00:09:36.492 { 00:09:36.492 "name": null, 00:09:36.492 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:36.492 "is_configured": false, 00:09:36.492 "data_offset": 0, 00:09:36.492 "data_size": 65536 00:09:36.492 }, 00:09:36.492 { 00:09:36.492 "name": "BaseBdev3", 00:09:36.492 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:36.492 "is_configured": true, 00:09:36.492 "data_offset": 0, 00:09:36.492 "data_size": 65536 00:09:36.492 } 00:09:36.492 ] 00:09:36.492 }' 00:09:36.492 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.492 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.751 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.751 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.751 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.751 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.751 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.751 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:36.751 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.751 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.751 
18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.009 [2024-12-06 18:06:48.934764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.009 BaseBdev1 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.009 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.009 [ 00:09:37.009 { 00:09:37.009 "name": "BaseBdev1", 00:09:37.009 "aliases": [ 00:09:37.009 "8d9832b6-eca5-4149-b95b-a70dab6a6b97" 00:09:37.009 ], 00:09:37.009 "product_name": 
"Malloc disk", 00:09:37.009 "block_size": 512, 00:09:37.009 "num_blocks": 65536, 00:09:37.009 "uuid": "8d9832b6-eca5-4149-b95b-a70dab6a6b97", 00:09:37.009 "assigned_rate_limits": { 00:09:37.009 "rw_ios_per_sec": 0, 00:09:37.009 "rw_mbytes_per_sec": 0, 00:09:37.009 "r_mbytes_per_sec": 0, 00:09:37.009 "w_mbytes_per_sec": 0 00:09:37.009 }, 00:09:37.009 "claimed": true, 00:09:37.009 "claim_type": "exclusive_write", 00:09:37.009 "zoned": false, 00:09:37.009 "supported_io_types": { 00:09:37.009 "read": true, 00:09:37.009 "write": true, 00:09:37.009 "unmap": true, 00:09:37.009 "flush": true, 00:09:37.009 "reset": true, 00:09:37.009 "nvme_admin": false, 00:09:37.009 "nvme_io": false, 00:09:37.009 "nvme_io_md": false, 00:09:37.009 "write_zeroes": true, 00:09:37.009 "zcopy": true, 00:09:37.009 "get_zone_info": false, 00:09:37.009 "zone_management": false, 00:09:37.009 "zone_append": false, 00:09:37.009 "compare": false, 00:09:37.009 "compare_and_write": false, 00:09:37.009 "abort": true, 00:09:37.009 "seek_hole": false, 00:09:37.009 "seek_data": false, 00:09:37.009 "copy": true, 00:09:37.009 "nvme_iov_md": false 00:09:37.009 }, 00:09:37.009 "memory_domains": [ 00:09:37.009 { 00:09:37.009 "dma_device_id": "system", 00:09:37.009 "dma_device_type": 1 00:09:37.009 }, 00:09:37.010 { 00:09:37.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.010 "dma_device_type": 2 00:09:37.010 } 00:09:37.010 ], 00:09:37.010 "driver_specific": {} 00:09:37.010 } 00:09:37.010 ] 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.010 18:06:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.010 18:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.010 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.010 "name": "Existed_Raid", 00:09:37.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.010 "strip_size_kb": 64, 00:09:37.010 "state": "configuring", 00:09:37.010 "raid_level": "concat", 00:09:37.010 "superblock": false, 00:09:37.010 "num_base_bdevs": 3, 00:09:37.010 "num_base_bdevs_discovered": 2, 00:09:37.010 "num_base_bdevs_operational": 3, 00:09:37.010 "base_bdevs_list": [ 00:09:37.010 { 00:09:37.010 "name": "BaseBdev1", 
00:09:37.010 "uuid": "8d9832b6-eca5-4149-b95b-a70dab6a6b97", 00:09:37.010 "is_configured": true, 00:09:37.010 "data_offset": 0, 00:09:37.010 "data_size": 65536 00:09:37.010 }, 00:09:37.010 { 00:09:37.010 "name": null, 00:09:37.010 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:37.010 "is_configured": false, 00:09:37.010 "data_offset": 0, 00:09:37.010 "data_size": 65536 00:09:37.010 }, 00:09:37.010 { 00:09:37.010 "name": "BaseBdev3", 00:09:37.010 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:37.010 "is_configured": true, 00:09:37.010 "data_offset": 0, 00:09:37.010 "data_size": 65536 00:09:37.010 } 00:09:37.010 ] 00:09:37.010 }' 00:09:37.010 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.010 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.577 [2024-12-06 18:06:49.497938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.577 
18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.577 "name": "Existed_Raid", 00:09:37.577 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:37.577 "strip_size_kb": 64, 00:09:37.577 "state": "configuring", 00:09:37.577 "raid_level": "concat", 00:09:37.577 "superblock": false, 00:09:37.577 "num_base_bdevs": 3, 00:09:37.577 "num_base_bdevs_discovered": 1, 00:09:37.577 "num_base_bdevs_operational": 3, 00:09:37.577 "base_bdevs_list": [ 00:09:37.577 { 00:09:37.577 "name": "BaseBdev1", 00:09:37.577 "uuid": "8d9832b6-eca5-4149-b95b-a70dab6a6b97", 00:09:37.577 "is_configured": true, 00:09:37.577 "data_offset": 0, 00:09:37.577 "data_size": 65536 00:09:37.577 }, 00:09:37.577 { 00:09:37.577 "name": null, 00:09:37.577 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:37.577 "is_configured": false, 00:09:37.577 "data_offset": 0, 00:09:37.577 "data_size": 65536 00:09:37.577 }, 00:09:37.577 { 00:09:37.577 "name": null, 00:09:37.577 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:37.577 "is_configured": false, 00:09:37.577 "data_offset": 0, 00:09:37.577 "data_size": 65536 00:09:37.577 } 00:09:37.577 ] 00:09:37.577 }' 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.577 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.835 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.835 18:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.835 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.835 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.835 18:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.094 [2024-12-06 18:06:50.025107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.094 "name": "Existed_Raid", 00:09:38.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.094 "strip_size_kb": 64, 00:09:38.094 "state": "configuring", 00:09:38.094 "raid_level": "concat", 00:09:38.094 "superblock": false, 00:09:38.094 "num_base_bdevs": 3, 00:09:38.094 "num_base_bdevs_discovered": 2, 00:09:38.094 "num_base_bdevs_operational": 3, 00:09:38.094 "base_bdevs_list": [ 00:09:38.094 { 00:09:38.094 "name": "BaseBdev1", 00:09:38.094 "uuid": "8d9832b6-eca5-4149-b95b-a70dab6a6b97", 00:09:38.094 "is_configured": true, 00:09:38.094 "data_offset": 0, 00:09:38.094 "data_size": 65536 00:09:38.094 }, 00:09:38.094 { 00:09:38.094 "name": null, 00:09:38.094 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:38.094 "is_configured": false, 00:09:38.094 "data_offset": 0, 00:09:38.094 "data_size": 65536 00:09:38.094 }, 00:09:38.094 { 00:09:38.094 "name": "BaseBdev3", 00:09:38.094 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:38.094 "is_configured": true, 00:09:38.094 "data_offset": 0, 00:09:38.094 "data_size": 65536 00:09:38.094 } 00:09:38.094 ] 00:09:38.094 }' 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.094 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.354 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.354 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.354 18:06:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.354 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.354 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.354 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:38.354 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.354 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.354 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.354 [2024-12-06 18:06:50.492331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.612 
18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.612 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.612 "name": "Existed_Raid", 00:09:38.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.612 "strip_size_kb": 64, 00:09:38.612 "state": "configuring", 00:09:38.612 "raid_level": "concat", 00:09:38.612 "superblock": false, 00:09:38.612 "num_base_bdevs": 3, 00:09:38.612 "num_base_bdevs_discovered": 1, 00:09:38.612 "num_base_bdevs_operational": 3, 00:09:38.612 "base_bdevs_list": [ 00:09:38.612 { 00:09:38.612 "name": null, 00:09:38.612 "uuid": "8d9832b6-eca5-4149-b95b-a70dab6a6b97", 00:09:38.612 "is_configured": false, 00:09:38.612 "data_offset": 0, 00:09:38.612 "data_size": 65536 00:09:38.612 }, 00:09:38.612 { 00:09:38.612 "name": null, 00:09:38.612 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:38.612 "is_configured": false, 00:09:38.612 "data_offset": 0, 00:09:38.612 "data_size": 65536 00:09:38.612 }, 00:09:38.612 { 00:09:38.612 "name": "BaseBdev3", 00:09:38.612 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:38.612 "is_configured": true, 00:09:38.612 "data_offset": 0, 00:09:38.612 "data_size": 65536 00:09:38.612 } 00:09:38.612 ] 00:09:38.613 }' 00:09:38.613 18:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.613 18:06:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.232 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.233 [2024-12-06 18:06:51.116242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.233 18:06:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.233 "name": "Existed_Raid", 00:09:39.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.233 "strip_size_kb": 64, 00:09:39.233 "state": "configuring", 00:09:39.233 "raid_level": "concat", 00:09:39.233 "superblock": false, 00:09:39.233 "num_base_bdevs": 3, 00:09:39.233 "num_base_bdevs_discovered": 2, 00:09:39.233 "num_base_bdevs_operational": 3, 00:09:39.233 "base_bdevs_list": [ 00:09:39.233 { 00:09:39.233 "name": null, 00:09:39.233 "uuid": "8d9832b6-eca5-4149-b95b-a70dab6a6b97", 00:09:39.233 "is_configured": false, 00:09:39.233 "data_offset": 0, 00:09:39.233 "data_size": 65536 00:09:39.233 }, 00:09:39.233 { 00:09:39.233 "name": "BaseBdev2", 00:09:39.233 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:39.233 "is_configured": true, 00:09:39.233 "data_offset": 
0, 00:09:39.233 "data_size": 65536 00:09:39.233 }, 00:09:39.233 { 00:09:39.233 "name": "BaseBdev3", 00:09:39.233 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:39.233 "is_configured": true, 00:09:39.233 "data_offset": 0, 00:09:39.233 "data_size": 65536 00:09:39.233 } 00:09:39.233 ] 00:09:39.233 }' 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.233 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.507 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.507 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.507 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.507 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.507 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.507 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:39.507 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d9832b6-eca5-4149-b95b-a70dab6a6b97 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.766 [2024-12-06 18:06:51.762580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:39.766 [2024-12-06 18:06:51.762743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:39.766 [2024-12-06 18:06:51.762789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:39.766 [2024-12-06 18:06:51.763156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:39.766 [2024-12-06 18:06:51.763411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:39.766 [2024-12-06 18:06:51.763467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:39.766 [2024-12-06 18:06:51.763856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.766 NewBaseBdev 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.766 
18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.766 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.767 [ 00:09:39.767 { 00:09:39.767 "name": "NewBaseBdev", 00:09:39.767 "aliases": [ 00:09:39.767 "8d9832b6-eca5-4149-b95b-a70dab6a6b97" 00:09:39.767 ], 00:09:39.767 "product_name": "Malloc disk", 00:09:39.767 "block_size": 512, 00:09:39.767 "num_blocks": 65536, 00:09:39.767 "uuid": "8d9832b6-eca5-4149-b95b-a70dab6a6b97", 00:09:39.767 "assigned_rate_limits": { 00:09:39.767 "rw_ios_per_sec": 0, 00:09:39.767 "rw_mbytes_per_sec": 0, 00:09:39.767 "r_mbytes_per_sec": 0, 00:09:39.767 "w_mbytes_per_sec": 0 00:09:39.767 }, 00:09:39.767 "claimed": true, 00:09:39.767 "claim_type": "exclusive_write", 00:09:39.767 "zoned": false, 00:09:39.767 "supported_io_types": { 00:09:39.767 "read": true, 00:09:39.767 "write": true, 00:09:39.767 "unmap": true, 00:09:39.767 "flush": true, 00:09:39.767 "reset": true, 00:09:39.767 "nvme_admin": false, 00:09:39.767 "nvme_io": false, 00:09:39.767 "nvme_io_md": false, 00:09:39.767 "write_zeroes": true, 00:09:39.767 "zcopy": true, 00:09:39.767 "get_zone_info": false, 00:09:39.767 "zone_management": false, 00:09:39.767 "zone_append": false, 00:09:39.767 "compare": false, 00:09:39.767 "compare_and_write": false, 00:09:39.767 "abort": true, 00:09:39.767 "seek_hole": false, 00:09:39.767 "seek_data": false, 00:09:39.767 "copy": true, 00:09:39.767 "nvme_iov_md": false 00:09:39.767 }, 00:09:39.767 
"memory_domains": [ 00:09:39.767 { 00:09:39.767 "dma_device_id": "system", 00:09:39.767 "dma_device_type": 1 00:09:39.767 }, 00:09:39.767 { 00:09:39.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.767 "dma_device_type": 2 00:09:39.767 } 00:09:39.767 ], 00:09:39.767 "driver_specific": {} 00:09:39.767 } 00:09:39.767 ] 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.767 "name": "Existed_Raid", 00:09:39.767 "uuid": "52c72bf3-0353-4b16-bcc9-db1624050a73", 00:09:39.767 "strip_size_kb": 64, 00:09:39.767 "state": "online", 00:09:39.767 "raid_level": "concat", 00:09:39.767 "superblock": false, 00:09:39.767 "num_base_bdevs": 3, 00:09:39.767 "num_base_bdevs_discovered": 3, 00:09:39.767 "num_base_bdevs_operational": 3, 00:09:39.767 "base_bdevs_list": [ 00:09:39.767 { 00:09:39.767 "name": "NewBaseBdev", 00:09:39.767 "uuid": "8d9832b6-eca5-4149-b95b-a70dab6a6b97", 00:09:39.767 "is_configured": true, 00:09:39.767 "data_offset": 0, 00:09:39.767 "data_size": 65536 00:09:39.767 }, 00:09:39.767 { 00:09:39.767 "name": "BaseBdev2", 00:09:39.767 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:39.767 "is_configured": true, 00:09:39.767 "data_offset": 0, 00:09:39.767 "data_size": 65536 00:09:39.767 }, 00:09:39.767 { 00:09:39.767 "name": "BaseBdev3", 00:09:39.767 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:39.767 "is_configured": true, 00:09:39.767 "data_offset": 0, 00:09:39.767 "data_size": 65536 00:09:39.767 } 00:09:39.767 ] 00:09:39.767 }' 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.767 18:06:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.336 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.336 [2024-12-06 18:06:52.278188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.337 "name": "Existed_Raid", 00:09:40.337 "aliases": [ 00:09:40.337 "52c72bf3-0353-4b16-bcc9-db1624050a73" 00:09:40.337 ], 00:09:40.337 "product_name": "Raid Volume", 00:09:40.337 "block_size": 512, 00:09:40.337 "num_blocks": 196608, 00:09:40.337 "uuid": "52c72bf3-0353-4b16-bcc9-db1624050a73", 00:09:40.337 "assigned_rate_limits": { 00:09:40.337 "rw_ios_per_sec": 0, 00:09:40.337 "rw_mbytes_per_sec": 0, 00:09:40.337 "r_mbytes_per_sec": 0, 00:09:40.337 "w_mbytes_per_sec": 0 00:09:40.337 }, 00:09:40.337 "claimed": false, 00:09:40.337 "zoned": false, 00:09:40.337 "supported_io_types": { 00:09:40.337 "read": true, 00:09:40.337 "write": true, 00:09:40.337 "unmap": true, 00:09:40.337 "flush": true, 00:09:40.337 "reset": true, 00:09:40.337 "nvme_admin": false, 00:09:40.337 "nvme_io": false, 00:09:40.337 "nvme_io_md": false, 00:09:40.337 "write_zeroes": true, 
00:09:40.337 "zcopy": false, 00:09:40.337 "get_zone_info": false, 00:09:40.337 "zone_management": false, 00:09:40.337 "zone_append": false, 00:09:40.337 "compare": false, 00:09:40.337 "compare_and_write": false, 00:09:40.337 "abort": false, 00:09:40.337 "seek_hole": false, 00:09:40.337 "seek_data": false, 00:09:40.337 "copy": false, 00:09:40.337 "nvme_iov_md": false 00:09:40.337 }, 00:09:40.337 "memory_domains": [ 00:09:40.337 { 00:09:40.337 "dma_device_id": "system", 00:09:40.337 "dma_device_type": 1 00:09:40.337 }, 00:09:40.337 { 00:09:40.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.337 "dma_device_type": 2 00:09:40.337 }, 00:09:40.337 { 00:09:40.337 "dma_device_id": "system", 00:09:40.337 "dma_device_type": 1 00:09:40.337 }, 00:09:40.337 { 00:09:40.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.337 "dma_device_type": 2 00:09:40.337 }, 00:09:40.337 { 00:09:40.337 "dma_device_id": "system", 00:09:40.337 "dma_device_type": 1 00:09:40.337 }, 00:09:40.337 { 00:09:40.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.337 "dma_device_type": 2 00:09:40.337 } 00:09:40.337 ], 00:09:40.337 "driver_specific": { 00:09:40.337 "raid": { 00:09:40.337 "uuid": "52c72bf3-0353-4b16-bcc9-db1624050a73", 00:09:40.337 "strip_size_kb": 64, 00:09:40.337 "state": "online", 00:09:40.337 "raid_level": "concat", 00:09:40.337 "superblock": false, 00:09:40.337 "num_base_bdevs": 3, 00:09:40.337 "num_base_bdevs_discovered": 3, 00:09:40.337 "num_base_bdevs_operational": 3, 00:09:40.337 "base_bdevs_list": [ 00:09:40.337 { 00:09:40.337 "name": "NewBaseBdev", 00:09:40.337 "uuid": "8d9832b6-eca5-4149-b95b-a70dab6a6b97", 00:09:40.337 "is_configured": true, 00:09:40.337 "data_offset": 0, 00:09:40.337 "data_size": 65536 00:09:40.337 }, 00:09:40.337 { 00:09:40.337 "name": "BaseBdev2", 00:09:40.337 "uuid": "f04cdb5d-ac24-407d-bb6d-e4a23b193a9f", 00:09:40.337 "is_configured": true, 00:09:40.337 "data_offset": 0, 00:09:40.337 "data_size": 65536 00:09:40.337 }, 00:09:40.337 { 
00:09:40.337 "name": "BaseBdev3", 00:09:40.337 "uuid": "acfd4125-54f5-44fb-a054-91f3650254f6", 00:09:40.337 "is_configured": true, 00:09:40.337 "data_offset": 0, 00:09:40.337 "data_size": 65536 00:09:40.337 } 00:09:40.337 ] 00:09:40.337 } 00:09:40.337 } 00:09:40.337 }' 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:40.337 BaseBdev2 00:09:40.337 BaseBdev3' 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.337 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.338 18:06:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.338 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.338 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.338 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.338 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:40.598 [2024-12-06 18:06:52.565326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.598 [2024-12-06 18:06:52.565359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.598 [2024-12-06 18:06:52.565456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.598 [2024-12-06 18:06:52.565519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.598 [2024-12-06 18:06:52.565532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66037 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 66037 ']' 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 66037 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66037 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66037' 00:09:40.598 killing process with pid 66037 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 66037 00:09:40.598 [2024-12-06 18:06:52.614641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.598 18:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 66037 00:09:40.858 [2024-12-06 18:06:52.949587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.238 00:09:42.238 real 0m11.145s 00:09:42.238 user 0m17.693s 00:09:42.238 sys 0m1.956s 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.238 ************************************ 00:09:42.238 END TEST raid_state_function_test 00:09:42.238 ************************************ 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.238 18:06:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:42.238 18:06:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:42.238 18:06:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.238 18:06:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.238 ************************************ 00:09:42.238 START TEST raid_state_function_test_sb 00:09:42.238 ************************************ 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66664 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66664' 00:09:42.238 Process raid pid: 66664 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66664 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66664 ']' 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.238 18:06:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.238 [2024-12-06 18:06:54.297482] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:09:42.238 [2024-12-06 18:06:54.297602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.498 [2024-12-06 18:06:54.473213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.499 [2024-12-06 18:06:54.587856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.758 [2024-12-06 18:06:54.806886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.758 [2024-12-06 18:06:54.806935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.018 [2024-12-06 18:06:55.162854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.018 [2024-12-06 18:06:55.162920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.018 [2024-12-06 
18:06:55.162933] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.018 [2024-12-06 18:06:55.162944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.018 [2024-12-06 18:06:55.162951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:43.018 [2024-12-06 18:06:55.162961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.018 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.019 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.019 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.019 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.019 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.019 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.019 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.019 18:06:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.019 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.019 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.295 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.295 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.295 "name": "Existed_Raid", 00:09:43.295 "uuid": "57d1bd60-5ce0-4b42-8259-93204f67c966", 00:09:43.295 "strip_size_kb": 64, 00:09:43.295 "state": "configuring", 00:09:43.295 "raid_level": "concat", 00:09:43.295 "superblock": true, 00:09:43.295 "num_base_bdevs": 3, 00:09:43.295 "num_base_bdevs_discovered": 0, 00:09:43.295 "num_base_bdevs_operational": 3, 00:09:43.295 "base_bdevs_list": [ 00:09:43.295 { 00:09:43.295 "name": "BaseBdev1", 00:09:43.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.295 "is_configured": false, 00:09:43.295 "data_offset": 0, 00:09:43.295 "data_size": 0 00:09:43.295 }, 00:09:43.295 { 00:09:43.295 "name": "BaseBdev2", 00:09:43.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.295 "is_configured": false, 00:09:43.295 "data_offset": 0, 00:09:43.295 "data_size": 0 00:09:43.295 }, 00:09:43.295 { 00:09:43.295 "name": "BaseBdev3", 00:09:43.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.295 "is_configured": false, 00:09:43.295 "data_offset": 0, 00:09:43.295 "data_size": 0 00:09:43.295 } 00:09:43.295 ] 00:09:43.295 }' 00:09:43.295 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.295 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.557 [2024-12-06 18:06:55.638020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.557 [2024-12-06 18:06:55.638144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.557 [2024-12-06 18:06:55.649974] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.557 [2024-12-06 18:06:55.650069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.557 [2024-12-06 18:06:55.650103] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.557 [2024-12-06 18:06:55.650127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.557 [2024-12-06 18:06:55.650146] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:43.557 [2024-12-06 18:06:55.650168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.557 
18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.557 [2024-12-06 18:06:55.699692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.557 BaseBdev1 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.557 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.817 [ 00:09:43.817 { 
00:09:43.817 "name": "BaseBdev1", 00:09:43.817 "aliases": [ 00:09:43.817 "f6e3b7a7-20fe-4258-85a8-7e9d3c298052" 00:09:43.817 ], 00:09:43.817 "product_name": "Malloc disk", 00:09:43.817 "block_size": 512, 00:09:43.817 "num_blocks": 65536, 00:09:43.817 "uuid": "f6e3b7a7-20fe-4258-85a8-7e9d3c298052", 00:09:43.817 "assigned_rate_limits": { 00:09:43.817 "rw_ios_per_sec": 0, 00:09:43.817 "rw_mbytes_per_sec": 0, 00:09:43.817 "r_mbytes_per_sec": 0, 00:09:43.817 "w_mbytes_per_sec": 0 00:09:43.817 }, 00:09:43.817 "claimed": true, 00:09:43.817 "claim_type": "exclusive_write", 00:09:43.817 "zoned": false, 00:09:43.817 "supported_io_types": { 00:09:43.817 "read": true, 00:09:43.817 "write": true, 00:09:43.817 "unmap": true, 00:09:43.817 "flush": true, 00:09:43.817 "reset": true, 00:09:43.817 "nvme_admin": false, 00:09:43.817 "nvme_io": false, 00:09:43.817 "nvme_io_md": false, 00:09:43.817 "write_zeroes": true, 00:09:43.817 "zcopy": true, 00:09:43.817 "get_zone_info": false, 00:09:43.817 "zone_management": false, 00:09:43.817 "zone_append": false, 00:09:43.817 "compare": false, 00:09:43.817 "compare_and_write": false, 00:09:43.817 "abort": true, 00:09:43.817 "seek_hole": false, 00:09:43.817 "seek_data": false, 00:09:43.817 "copy": true, 00:09:43.817 "nvme_iov_md": false 00:09:43.817 }, 00:09:43.817 "memory_domains": [ 00:09:43.817 { 00:09:43.817 "dma_device_id": "system", 00:09:43.817 "dma_device_type": 1 00:09:43.817 }, 00:09:43.817 { 00:09:43.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.817 "dma_device_type": 2 00:09:43.817 } 00:09:43.817 ], 00:09:43.817 "driver_specific": {} 00:09:43.817 } 00:09:43.817 ] 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.817 "name": "Existed_Raid", 00:09:43.817 "uuid": "6d03d6c2-a488-49f6-8875-795d7bace028", 00:09:43.817 "strip_size_kb": 64, 00:09:43.817 "state": "configuring", 00:09:43.817 "raid_level": "concat", 00:09:43.817 "superblock": true, 00:09:43.817 
"num_base_bdevs": 3, 00:09:43.817 "num_base_bdevs_discovered": 1, 00:09:43.817 "num_base_bdevs_operational": 3, 00:09:43.817 "base_bdevs_list": [ 00:09:43.817 { 00:09:43.817 "name": "BaseBdev1", 00:09:43.817 "uuid": "f6e3b7a7-20fe-4258-85a8-7e9d3c298052", 00:09:43.817 "is_configured": true, 00:09:43.817 "data_offset": 2048, 00:09:43.817 "data_size": 63488 00:09:43.817 }, 00:09:43.817 { 00:09:43.817 "name": "BaseBdev2", 00:09:43.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.817 "is_configured": false, 00:09:43.817 "data_offset": 0, 00:09:43.817 "data_size": 0 00:09:43.817 }, 00:09:43.817 { 00:09:43.817 "name": "BaseBdev3", 00:09:43.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.817 "is_configured": false, 00:09:43.817 "data_offset": 0, 00:09:43.817 "data_size": 0 00:09:43.817 } 00:09:43.817 ] 00:09:43.817 }' 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.817 18:06:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.077 [2024-12-06 18:06:56.170987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.077 [2024-12-06 18:06:56.171046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.077 
18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.077 [2024-12-06 18:06:56.183077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.077 [2024-12-06 18:06:56.185168] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.077 [2024-12-06 18:06:56.185214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.077 [2024-12-06 18:06:56.185227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.077 [2024-12-06 18:06:56.185237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.077 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.077 "name": "Existed_Raid", 00:09:44.077 "uuid": "183d293b-d913-4c1a-8fc5-eb50680bf271", 00:09:44.077 "strip_size_kb": 64, 00:09:44.077 "state": "configuring", 00:09:44.077 "raid_level": "concat", 00:09:44.077 "superblock": true, 00:09:44.077 "num_base_bdevs": 3, 00:09:44.078 "num_base_bdevs_discovered": 1, 00:09:44.078 "num_base_bdevs_operational": 3, 00:09:44.078 "base_bdevs_list": [ 00:09:44.078 { 00:09:44.078 "name": "BaseBdev1", 00:09:44.078 "uuid": "f6e3b7a7-20fe-4258-85a8-7e9d3c298052", 00:09:44.078 "is_configured": true, 00:09:44.078 "data_offset": 2048, 00:09:44.078 "data_size": 63488 00:09:44.078 }, 00:09:44.078 { 00:09:44.078 "name": "BaseBdev2", 00:09:44.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.078 "is_configured": false, 00:09:44.078 "data_offset": 0, 00:09:44.078 "data_size": 0 00:09:44.078 }, 00:09:44.078 { 00:09:44.078 "name": "BaseBdev3", 00:09:44.078 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:44.078 "is_configured": false, 00:09:44.078 "data_offset": 0, 00:09:44.078 "data_size": 0 00:09:44.078 } 00:09:44.078 ] 00:09:44.078 }' 00:09:44.078 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.078 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.647 [2024-12-06 18:06:56.676311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.647 BaseBdev2 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.647 [ 00:09:44.647 { 00:09:44.647 "name": "BaseBdev2", 00:09:44.647 "aliases": [ 00:09:44.647 "ce0b26f5-e039-4c74-8b4a-53095ebdf41e" 00:09:44.647 ], 00:09:44.647 "product_name": "Malloc disk", 00:09:44.647 "block_size": 512, 00:09:44.647 "num_blocks": 65536, 00:09:44.647 "uuid": "ce0b26f5-e039-4c74-8b4a-53095ebdf41e", 00:09:44.647 "assigned_rate_limits": { 00:09:44.647 "rw_ios_per_sec": 0, 00:09:44.647 "rw_mbytes_per_sec": 0, 00:09:44.647 "r_mbytes_per_sec": 0, 00:09:44.647 "w_mbytes_per_sec": 0 00:09:44.647 }, 00:09:44.647 "claimed": true, 00:09:44.647 "claim_type": "exclusive_write", 00:09:44.647 "zoned": false, 00:09:44.647 "supported_io_types": { 00:09:44.647 "read": true, 00:09:44.647 "write": true, 00:09:44.647 "unmap": true, 00:09:44.647 "flush": true, 00:09:44.647 "reset": true, 00:09:44.647 "nvme_admin": false, 00:09:44.647 "nvme_io": false, 00:09:44.647 "nvme_io_md": false, 00:09:44.647 "write_zeroes": true, 00:09:44.647 "zcopy": true, 00:09:44.647 "get_zone_info": false, 00:09:44.647 "zone_management": false, 00:09:44.647 "zone_append": false, 00:09:44.647 "compare": false, 00:09:44.647 "compare_and_write": false, 00:09:44.647 "abort": true, 00:09:44.647 "seek_hole": false, 00:09:44.647 "seek_data": false, 00:09:44.647 "copy": true, 00:09:44.647 "nvme_iov_md": false 00:09:44.647 }, 00:09:44.647 "memory_domains": [ 00:09:44.647 { 00:09:44.647 "dma_device_id": "system", 00:09:44.647 "dma_device_type": 1 00:09:44.647 }, 00:09:44.647 { 00:09:44.647 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.647 "dma_device_type": 2 00:09:44.647 } 00:09:44.647 ], 00:09:44.647 "driver_specific": {} 00:09:44.647 } 00:09:44.647 ] 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.647 "name": "Existed_Raid", 00:09:44.647 "uuid": "183d293b-d913-4c1a-8fc5-eb50680bf271", 00:09:44.647 "strip_size_kb": 64, 00:09:44.647 "state": "configuring", 00:09:44.647 "raid_level": "concat", 00:09:44.647 "superblock": true, 00:09:44.647 "num_base_bdevs": 3, 00:09:44.647 "num_base_bdevs_discovered": 2, 00:09:44.647 "num_base_bdevs_operational": 3, 00:09:44.647 "base_bdevs_list": [ 00:09:44.647 { 00:09:44.647 "name": "BaseBdev1", 00:09:44.647 "uuid": "f6e3b7a7-20fe-4258-85a8-7e9d3c298052", 00:09:44.647 "is_configured": true, 00:09:44.647 "data_offset": 2048, 00:09:44.647 "data_size": 63488 00:09:44.647 }, 00:09:44.647 { 00:09:44.647 "name": "BaseBdev2", 00:09:44.647 "uuid": "ce0b26f5-e039-4c74-8b4a-53095ebdf41e", 00:09:44.647 "is_configured": true, 00:09:44.647 "data_offset": 2048, 00:09:44.647 "data_size": 63488 00:09:44.647 }, 00:09:44.647 { 00:09:44.647 "name": "BaseBdev3", 00:09:44.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.647 "is_configured": false, 00:09:44.647 "data_offset": 0, 00:09:44.647 "data_size": 0 00:09:44.647 } 00:09:44.647 ] 00:09:44.647 }' 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.647 18:06:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.216 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.216 18:06:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.216 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.216 [2024-12-06 18:06:57.180111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.216 [2024-12-06 18:06:57.180488] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:45.216 [2024-12-06 18:06:57.180550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:45.216 [2024-12-06 18:06:57.180842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:45.216 [2024-12-06 18:06:57.181054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:45.216 [2024-12-06 18:06:57.181113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:45.216 [2024-12-06 18:06:57.181322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.216 BaseBdev3 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.217 [ 00:09:45.217 { 00:09:45.217 "name": "BaseBdev3", 00:09:45.217 "aliases": [ 00:09:45.217 "96ba06a6-0713-49e8-a512-ac289edd4bf7" 00:09:45.217 ], 00:09:45.217 "product_name": "Malloc disk", 00:09:45.217 "block_size": 512, 00:09:45.217 "num_blocks": 65536, 00:09:45.217 "uuid": "96ba06a6-0713-49e8-a512-ac289edd4bf7", 00:09:45.217 "assigned_rate_limits": { 00:09:45.217 "rw_ios_per_sec": 0, 00:09:45.217 "rw_mbytes_per_sec": 0, 00:09:45.217 "r_mbytes_per_sec": 0, 00:09:45.217 "w_mbytes_per_sec": 0 00:09:45.217 }, 00:09:45.217 "claimed": true, 00:09:45.217 "claim_type": "exclusive_write", 00:09:45.217 "zoned": false, 00:09:45.217 "supported_io_types": { 00:09:45.217 "read": true, 00:09:45.217 "write": true, 00:09:45.217 "unmap": true, 00:09:45.217 "flush": true, 00:09:45.217 "reset": true, 00:09:45.217 "nvme_admin": false, 00:09:45.217 "nvme_io": false, 00:09:45.217 "nvme_io_md": false, 00:09:45.217 "write_zeroes": true, 00:09:45.217 "zcopy": true, 00:09:45.217 "get_zone_info": false, 00:09:45.217 "zone_management": false, 00:09:45.217 "zone_append": false, 00:09:45.217 "compare": false, 00:09:45.217 "compare_and_write": false, 00:09:45.217 "abort": true, 00:09:45.217 "seek_hole": false, 00:09:45.217 "seek_data": false, 
00:09:45.217 "copy": true, 00:09:45.217 "nvme_iov_md": false 00:09:45.217 }, 00:09:45.217 "memory_domains": [ 00:09:45.217 { 00:09:45.217 "dma_device_id": "system", 00:09:45.217 "dma_device_type": 1 00:09:45.217 }, 00:09:45.217 { 00:09:45.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.217 "dma_device_type": 2 00:09:45.217 } 00:09:45.217 ], 00:09:45.217 "driver_specific": {} 00:09:45.217 } 00:09:45.217 ] 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.217 "name": "Existed_Raid", 00:09:45.217 "uuid": "183d293b-d913-4c1a-8fc5-eb50680bf271", 00:09:45.217 "strip_size_kb": 64, 00:09:45.217 "state": "online", 00:09:45.217 "raid_level": "concat", 00:09:45.217 "superblock": true, 00:09:45.217 "num_base_bdevs": 3, 00:09:45.217 "num_base_bdevs_discovered": 3, 00:09:45.217 "num_base_bdevs_operational": 3, 00:09:45.217 "base_bdevs_list": [ 00:09:45.217 { 00:09:45.217 "name": "BaseBdev1", 00:09:45.217 "uuid": "f6e3b7a7-20fe-4258-85a8-7e9d3c298052", 00:09:45.217 "is_configured": true, 00:09:45.217 "data_offset": 2048, 00:09:45.217 "data_size": 63488 00:09:45.217 }, 00:09:45.217 { 00:09:45.217 "name": "BaseBdev2", 00:09:45.217 "uuid": "ce0b26f5-e039-4c74-8b4a-53095ebdf41e", 00:09:45.217 "is_configured": true, 00:09:45.217 "data_offset": 2048, 00:09:45.217 "data_size": 63488 00:09:45.217 }, 00:09:45.217 { 00:09:45.217 "name": "BaseBdev3", 00:09:45.217 "uuid": "96ba06a6-0713-49e8-a512-ac289edd4bf7", 00:09:45.217 "is_configured": true, 00:09:45.217 "data_offset": 2048, 00:09:45.217 "data_size": 63488 00:09:45.217 } 00:09:45.217 ] 00:09:45.217 }' 00:09:45.217 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.217 18:06:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.785 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:45.785 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.786 [2024-12-06 18:06:57.695651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.786 "name": "Existed_Raid", 00:09:45.786 "aliases": [ 00:09:45.786 "183d293b-d913-4c1a-8fc5-eb50680bf271" 00:09:45.786 ], 00:09:45.786 "product_name": "Raid Volume", 00:09:45.786 "block_size": 512, 00:09:45.786 "num_blocks": 190464, 00:09:45.786 "uuid": "183d293b-d913-4c1a-8fc5-eb50680bf271", 00:09:45.786 "assigned_rate_limits": { 00:09:45.786 "rw_ios_per_sec": 0, 00:09:45.786 "rw_mbytes_per_sec": 0, 00:09:45.786 
"r_mbytes_per_sec": 0, 00:09:45.786 "w_mbytes_per_sec": 0 00:09:45.786 }, 00:09:45.786 "claimed": false, 00:09:45.786 "zoned": false, 00:09:45.786 "supported_io_types": { 00:09:45.786 "read": true, 00:09:45.786 "write": true, 00:09:45.786 "unmap": true, 00:09:45.786 "flush": true, 00:09:45.786 "reset": true, 00:09:45.786 "nvme_admin": false, 00:09:45.786 "nvme_io": false, 00:09:45.786 "nvme_io_md": false, 00:09:45.786 "write_zeroes": true, 00:09:45.786 "zcopy": false, 00:09:45.786 "get_zone_info": false, 00:09:45.786 "zone_management": false, 00:09:45.786 "zone_append": false, 00:09:45.786 "compare": false, 00:09:45.786 "compare_and_write": false, 00:09:45.786 "abort": false, 00:09:45.786 "seek_hole": false, 00:09:45.786 "seek_data": false, 00:09:45.786 "copy": false, 00:09:45.786 "nvme_iov_md": false 00:09:45.786 }, 00:09:45.786 "memory_domains": [ 00:09:45.786 { 00:09:45.786 "dma_device_id": "system", 00:09:45.786 "dma_device_type": 1 00:09:45.786 }, 00:09:45.786 { 00:09:45.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.786 "dma_device_type": 2 00:09:45.786 }, 00:09:45.786 { 00:09:45.786 "dma_device_id": "system", 00:09:45.786 "dma_device_type": 1 00:09:45.786 }, 00:09:45.786 { 00:09:45.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.786 "dma_device_type": 2 00:09:45.786 }, 00:09:45.786 { 00:09:45.786 "dma_device_id": "system", 00:09:45.786 "dma_device_type": 1 00:09:45.786 }, 00:09:45.786 { 00:09:45.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.786 "dma_device_type": 2 00:09:45.786 } 00:09:45.786 ], 00:09:45.786 "driver_specific": { 00:09:45.786 "raid": { 00:09:45.786 "uuid": "183d293b-d913-4c1a-8fc5-eb50680bf271", 00:09:45.786 "strip_size_kb": 64, 00:09:45.786 "state": "online", 00:09:45.786 "raid_level": "concat", 00:09:45.786 "superblock": true, 00:09:45.786 "num_base_bdevs": 3, 00:09:45.786 "num_base_bdevs_discovered": 3, 00:09:45.786 "num_base_bdevs_operational": 3, 00:09:45.786 "base_bdevs_list": [ 00:09:45.786 { 00:09:45.786 
"name": "BaseBdev1", 00:09:45.786 "uuid": "f6e3b7a7-20fe-4258-85a8-7e9d3c298052", 00:09:45.786 "is_configured": true, 00:09:45.786 "data_offset": 2048, 00:09:45.786 "data_size": 63488 00:09:45.786 }, 00:09:45.786 { 00:09:45.786 "name": "BaseBdev2", 00:09:45.786 "uuid": "ce0b26f5-e039-4c74-8b4a-53095ebdf41e", 00:09:45.786 "is_configured": true, 00:09:45.786 "data_offset": 2048, 00:09:45.786 "data_size": 63488 00:09:45.786 }, 00:09:45.786 { 00:09:45.786 "name": "BaseBdev3", 00:09:45.786 "uuid": "96ba06a6-0713-49e8-a512-ac289edd4bf7", 00:09:45.786 "is_configured": true, 00:09:45.786 "data_offset": 2048, 00:09:45.786 "data_size": 63488 00:09:45.786 } 00:09:45.786 ] 00:09:45.786 } 00:09:45.786 } 00:09:45.786 }' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:45.786 BaseBdev2 00:09:45.786 BaseBdev3' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.786 18:06:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.786 18:06:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.786 [2024-12-06 18:06:57.950936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.786 [2024-12-06 18:06:57.951024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.786 [2024-12-06 18:06:57.951129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.045 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.046 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.046 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.046 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.046 "name": "Existed_Raid", 00:09:46.046 "uuid": "183d293b-d913-4c1a-8fc5-eb50680bf271", 00:09:46.046 "strip_size_kb": 64, 00:09:46.046 "state": "offline", 00:09:46.046 "raid_level": "concat", 00:09:46.046 "superblock": true, 00:09:46.046 "num_base_bdevs": 3, 00:09:46.046 "num_base_bdevs_discovered": 2, 00:09:46.046 "num_base_bdevs_operational": 2, 00:09:46.046 "base_bdevs_list": [ 00:09:46.046 { 00:09:46.046 "name": null, 00:09:46.046 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:46.046 "is_configured": false, 00:09:46.046 "data_offset": 0, 00:09:46.046 "data_size": 63488 00:09:46.046 }, 00:09:46.046 { 00:09:46.046 "name": "BaseBdev2", 00:09:46.046 "uuid": "ce0b26f5-e039-4c74-8b4a-53095ebdf41e", 00:09:46.046 "is_configured": true, 00:09:46.046 "data_offset": 2048, 00:09:46.046 "data_size": 63488 00:09:46.046 }, 00:09:46.046 { 00:09:46.046 "name": "BaseBdev3", 00:09:46.046 "uuid": "96ba06a6-0713-49e8-a512-ac289edd4bf7", 00:09:46.046 "is_configured": true, 00:09:46.046 "data_offset": 2048, 00:09:46.046 "data_size": 63488 00:09:46.046 } 00:09:46.046 ] 00:09:46.046 }' 00:09:46.046 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.046 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.616 [2024-12-06 18:06:58.569960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.616 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.616 [2024-12-06 18:06:58.723981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:46.616 [2024-12-06 18:06:58.724108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.876 BaseBdev2 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.876 
18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:46.876 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.877 [ 00:09:46.877 { 00:09:46.877 "name": "BaseBdev2", 00:09:46.877 "aliases": [ 00:09:46.877 "9f84b033-43e3-4397-9409-f12490a4fda2" 00:09:46.877 ], 00:09:46.877 "product_name": "Malloc disk", 00:09:46.877 "block_size": 512, 00:09:46.877 "num_blocks": 65536, 00:09:46.877 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2", 00:09:46.877 "assigned_rate_limits": { 00:09:46.877 "rw_ios_per_sec": 0, 00:09:46.877 "rw_mbytes_per_sec": 0, 00:09:46.877 "r_mbytes_per_sec": 0, 00:09:46.877 "w_mbytes_per_sec": 0 
00:09:46.877 },
00:09:46.877 "claimed": false,
00:09:46.877 "zoned": false,
00:09:46.877 "supported_io_types": {
00:09:46.877 "read": true,
00:09:46.877 "write": true,
00:09:46.877 "unmap": true,
00:09:46.877 "flush": true,
00:09:46.877 "reset": true,
00:09:46.877 "nvme_admin": false,
00:09:46.877 "nvme_io": false,
00:09:46.877 "nvme_io_md": false,
00:09:46.877 "write_zeroes": true,
00:09:46.877 "zcopy": true,
00:09:46.877 "get_zone_info": false,
00:09:46.877 "zone_management": false,
00:09:46.877 "zone_append": false,
00:09:46.877 "compare": false,
00:09:46.877 "compare_and_write": false,
00:09:46.877 "abort": true,
00:09:46.877 "seek_hole": false,
00:09:46.877 "seek_data": false,
00:09:46.877 "copy": true,
00:09:46.877 "nvme_iov_md": false
00:09:46.877 },
00:09:46.877 "memory_domains": [
00:09:46.877 {
00:09:46.877 "dma_device_id": "system",
00:09:46.877 "dma_device_type": 1
00:09:46.877 },
00:09:46.877 {
00:09:46.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:46.877 "dma_device_type": 2
00:09:46.877 }
00:09:46.877 ],
00:09:46.877 "driver_specific": {}
00:09:46.877 }
00:09:46.877 ]
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.877 BaseBdev3
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.877 18:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.877 [
00:09:46.877 {
00:09:46.877 "name": "BaseBdev3",
00:09:46.877 "aliases": [
00:09:46.877 "56cc47d7-fe42-451a-8191-60fc057da164"
00:09:46.877 ],
00:09:46.877 "product_name": "Malloc disk",
00:09:46.877 "block_size": 512,
00:09:46.877 "num_blocks": 65536,
00:09:46.877 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164",
00:09:46.877 "assigned_rate_limits": {
00:09:46.877 "rw_ios_per_sec": 0,
00:09:46.877 "rw_mbytes_per_sec": 0,
00:09:46.877 "r_mbytes_per_sec": 0,
00:09:46.877 "w_mbytes_per_sec": 0
00:09:46.877 },
00:09:46.877 "claimed": false,
00:09:46.877 "zoned": false,
00:09:46.877 "supported_io_types": {
00:09:46.877 "read": true,
00:09:46.877 "write": true,
00:09:46.877 "unmap": true,
00:09:46.877 "flush": true,
00:09:46.877 "reset": true,
00:09:46.877 "nvme_admin": false,
00:09:46.877 "nvme_io": false,
00:09:46.877 "nvme_io_md": false,
00:09:46.877 "write_zeroes": true,
00:09:46.877 "zcopy": true,
00:09:46.877 "get_zone_info": false,
00:09:46.877 "zone_management": false,
00:09:46.877 "zone_append": false,
00:09:46.877 "compare": false,
00:09:46.877 "compare_and_write": false,
00:09:46.877 "abort": true,
00:09:46.877 "seek_hole": false,
00:09:46.877 "seek_data": false,
00:09:46.877 "copy": true,
00:09:46.877 "nvme_iov_md": false
00:09:46.877 },
00:09:46.877 "memory_domains": [
00:09:46.877 {
00:09:46.877 "dma_device_id": "system",
00:09:46.877 "dma_device_type": 1
00:09:46.877 },
00:09:46.877 {
00:09:46.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:46.877 "dma_device_type": 2
00:09:46.877 }
00:09:46.877 ],
00:09:46.877 "driver_specific": {}
00:09:46.877 }
00:09:46.877 ]
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.877 [2024-12-06 18:06:59.018268] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:46.877 [2024-12-06 18:06:59.018361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:46.877 [2024-12-06 18:06:59.018416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:46.877 [2024-12-06 18:06:59.020539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.877 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.137 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.137 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:47.137 "name": "Existed_Raid",
00:09:47.137 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657",
00:09:47.137 "strip_size_kb": 64,
00:09:47.137 "state": "configuring",
00:09:47.137 "raid_level": "concat",
00:09:47.137 "superblock": true,
00:09:47.137 "num_base_bdevs": 3,
00:09:47.137 "num_base_bdevs_discovered": 2,
00:09:47.137 "num_base_bdevs_operational": 3,
00:09:47.137 "base_bdevs_list": [
00:09:47.137 {
00:09:47.137 "name": "BaseBdev1",
00:09:47.137 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:47.137 "is_configured": false,
00:09:47.137 "data_offset": 0,
00:09:47.137 "data_size": 0
00:09:47.137 },
00:09:47.137 {
00:09:47.137 "name": "BaseBdev2",
00:09:47.137 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2",
00:09:47.137 "is_configured": true,
00:09:47.137 "data_offset": 2048,
00:09:47.137 "data_size": 63488
00:09:47.137 },
00:09:47.137 {
00:09:47.137 "name": "BaseBdev3",
00:09:47.137 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164",
00:09:47.137 "is_configured": true,
00:09:47.137 "data_offset": 2048,
00:09:47.137 "data_size": 63488
00:09:47.137 }
00:09:47.137 ]
00:09:47.137 }'
00:09:47.137 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:47.137 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.397 [2024-12-06 18:06:59.429628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:47.397 "name": "Existed_Raid",
00:09:47.397 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657",
00:09:47.397 "strip_size_kb": 64,
00:09:47.397 "state": "configuring",
00:09:47.397 "raid_level": "concat",
00:09:47.397 "superblock": true,
00:09:47.397 "num_base_bdevs": 3,
00:09:47.397 "num_base_bdevs_discovered": 1,
00:09:47.397 "num_base_bdevs_operational": 3,
00:09:47.397 "base_bdevs_list": [
00:09:47.397 {
00:09:47.397 "name": "BaseBdev1",
00:09:47.397 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:47.397 "is_configured": false,
00:09:47.397 "data_offset": 0,
00:09:47.397 "data_size": 0
00:09:47.397 },
00:09:47.397 {
00:09:47.397 "name": null,
00:09:47.397 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2",
00:09:47.397 "is_configured": false,
00:09:47.397 "data_offset": 0,
00:09:47.397 "data_size": 63488
00:09:47.397 },
00:09:47.397 {
00:09:47.397 "name": "BaseBdev3",
00:09:47.397 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164",
00:09:47.397 "is_configured": true,
00:09:47.397 "data_offset": 2048,
00:09:47.397 "data_size": 63488
00:09:47.397 }
00:09:47.397 ]
00:09:47.397 }'
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:47.397 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.965 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:47.965 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.965 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.965 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.965 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.965 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:47.965 18:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:47.965 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.965 18:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.965 [2024-12-06 18:07:00.002373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:47.965 BaseBdev1
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.966 [
00:09:47.966 {
00:09:47.966 "name": "BaseBdev1",
00:09:47.966 "aliases": [
00:09:47.966 "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e"
00:09:47.966 ],
00:09:47.966 "product_name": "Malloc disk",
00:09:47.966 "block_size": 512,
00:09:47.966 "num_blocks": 65536,
00:09:47.966 "uuid": "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e",
00:09:47.966 "assigned_rate_limits": {
00:09:47.966 "rw_ios_per_sec": 0,
00:09:47.966 "rw_mbytes_per_sec": 0,
00:09:47.966 "r_mbytes_per_sec": 0,
00:09:47.966 "w_mbytes_per_sec": 0
00:09:47.966 },
00:09:47.966 "claimed": true,
00:09:47.966 "claim_type": "exclusive_write",
00:09:47.966 "zoned": false,
00:09:47.966 "supported_io_types": {
00:09:47.966 "read": true,
00:09:47.966 "write": true,
00:09:47.966 "unmap": true,
00:09:47.966 "flush": true,
00:09:47.966 "reset": true,
00:09:47.966 "nvme_admin": false,
00:09:47.966 "nvme_io": false,
00:09:47.966 "nvme_io_md": false,
00:09:47.966 "write_zeroes": true,
00:09:47.966 "zcopy": true,
00:09:47.966 "get_zone_info": false,
00:09:47.966 "zone_management": false,
00:09:47.966 "zone_append": false,
00:09:47.966 "compare": false,
00:09:47.966 "compare_and_write": false,
00:09:47.966 "abort": true,
00:09:47.966 "seek_hole": false,
00:09:47.966 "seek_data": false,
00:09:47.966 "copy": true,
00:09:47.966 "nvme_iov_md": false
00:09:47.966 },
00:09:47.966 "memory_domains": [
00:09:47.966 {
00:09:47.966 "dma_device_id": "system",
00:09:47.966 "dma_device_type": 1
00:09:47.966 },
00:09:47.966 {
00:09:47.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:47.966 "dma_device_type": 2
00:09:47.966 }
00:09:47.966 ],
00:09:47.966 "driver_specific": {}
00:09:47.966 }
00:09:47.966 ]
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:47.966 "name": "Existed_Raid",
00:09:47.966 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657",
00:09:47.966 "strip_size_kb": 64,
00:09:47.966 "state": "configuring",
00:09:47.966 "raid_level": "concat",
00:09:47.966 "superblock": true,
00:09:47.966 "num_base_bdevs": 3,
00:09:47.966 "num_base_bdevs_discovered": 2,
00:09:47.966 "num_base_bdevs_operational": 3,
00:09:47.966 "base_bdevs_list": [
00:09:47.966 {
00:09:47.966 "name": "BaseBdev1",
00:09:47.966 "uuid": "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e",
00:09:47.966 "is_configured": true,
00:09:47.966 "data_offset": 2048,
00:09:47.966 "data_size": 63488
00:09:47.966 },
00:09:47.966 {
00:09:47.966 "name": null,
00:09:47.966 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2",
00:09:47.966 "is_configured": false,
00:09:47.966 "data_offset": 0,
00:09:47.966 "data_size": 63488
00:09:47.966 },
00:09:47.966 {
00:09:47.966 "name": "BaseBdev3",
00:09:47.966 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164",
00:09:47.966 "is_configured": true,
00:09:47.966 "data_offset": 2048,
00:09:47.966 "data_size": 63488
00:09:47.966 }
00:09:47.966 ]
00:09:47.966 }'
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:47.966 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:48.535 [2024-12-06 18:07:00.485619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:48.535 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:48.535 "name": "Existed_Raid",
00:09:48.535 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657",
00:09:48.535 "strip_size_kb": 64,
00:09:48.535 "state": "configuring",
00:09:48.535 "raid_level": "concat",
00:09:48.535 "superblock": true,
00:09:48.535 "num_base_bdevs": 3,
00:09:48.535 "num_base_bdevs_discovered": 1,
00:09:48.535 "num_base_bdevs_operational": 3,
00:09:48.535 "base_bdevs_list": [
00:09:48.535 {
00:09:48.535 "name": "BaseBdev1",
00:09:48.535 "uuid": "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e",
00:09:48.535 "is_configured": true,
00:09:48.535 "data_offset": 2048,
00:09:48.535 "data_size": 63488
00:09:48.535 },
00:09:48.535 {
00:09:48.535 "name": null,
00:09:48.535 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2",
00:09:48.535 "is_configured": false,
00:09:48.535 "data_offset": 0,
00:09:48.535 "data_size": 63488
00:09:48.535 },
00:09:48.536 {
00:09:48.536 "name": null,
00:09:48.536 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164",
00:09:48.536 "is_configured": false,
00:09:48.536 "data_offset": 0,
00:09:48.536 "data_size": 63488
00:09:48.536 }
00:09:48.536 ]
00:09:48.536 }'
00:09:48.536 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:48.536 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:48.794 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:48.794 18:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:48.794 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.794 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:49.053 18:07:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:49.053 [2024-12-06 18:07:01.020778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:49.053 "name": "Existed_Raid",
00:09:49.053 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657",
00:09:49.053 "strip_size_kb": 64,
00:09:49.053 "state": "configuring",
00:09:49.053 "raid_level": "concat",
00:09:49.053 "superblock": true,
00:09:49.053 "num_base_bdevs": 3,
00:09:49.053 "num_base_bdevs_discovered": 2,
00:09:49.053 "num_base_bdevs_operational": 3,
00:09:49.053 "base_bdevs_list": [
00:09:49.053 {
00:09:49.053 "name": "BaseBdev1",
00:09:49.053 "uuid": "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e",
00:09:49.053 "is_configured": true,
00:09:49.053 "data_offset": 2048,
00:09:49.053 "data_size": 63488
00:09:49.053 },
00:09:49.053 {
00:09:49.053 "name": null,
00:09:49.053 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2",
00:09:49.053 "is_configured": false,
00:09:49.053 "data_offset": 0,
00:09:49.053 "data_size": 63488
00:09:49.053 },
00:09:49.053 {
00:09:49.053 "name": "BaseBdev3",
00:09:49.053 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164",
00:09:49.053 "is_configured": true,
00:09:49.053 "data_offset": 2048,
00:09:49.053 "data_size": 63488
00:09:49.053 }
00:09:49.053 ]
00:09:49.053 }'
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:49.053 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:49.311 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:49.312 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.312 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:49.312 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:49.621 [2024-12-06 18:07:01.507988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:49.621 "name": "Existed_Raid",
00:09:49.621 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657",
00:09:49.621 "strip_size_kb": 64,
00:09:49.621 "state": "configuring",
00:09:49.621 "raid_level": "concat",
00:09:49.621 "superblock": true,
00:09:49.621 "num_base_bdevs": 3,
00:09:49.621 "num_base_bdevs_discovered": 1,
00:09:49.621 "num_base_bdevs_operational": 3,
00:09:49.621 "base_bdevs_list": [
00:09:49.621 {
00:09:49.621 "name": null,
00:09:49.621 "uuid": "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e",
00:09:49.621 "is_configured": false,
00:09:49.621 "data_offset": 0,
00:09:49.621 "data_size": 63488
00:09:49.621 },
00:09:49.621 {
00:09:49.621 "name": null,
00:09:49.621 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2",
00:09:49.621 "is_configured": false,
00:09:49.621 "data_offset": 0,
00:09:49.621 "data_size": 63488
00:09:49.621 },
00:09:49.621 {
00:09:49.621 "name": "BaseBdev3",
00:09:49.621 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164",
00:09:49.621 "is_configured": true,
00:09:49.621 "data_offset": 2048,
00:09:49.621 "data_size": 63488
00:09:49.621 }
00:09:49.621 ]
00:09:49.621 }'
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:49.621 18:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:49.895 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:49.895 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:49.895 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.155 [2024-12-06 18:07:02.109750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:50.155 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.155 "name": "Existed_Raid", 00:09:50.155 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657", 00:09:50.155 "strip_size_kb": 64, 00:09:50.155 "state": "configuring", 00:09:50.155 "raid_level": "concat", 00:09:50.155 "superblock": true, 00:09:50.155 "num_base_bdevs": 3, 00:09:50.155 "num_base_bdevs_discovered": 2, 00:09:50.155 "num_base_bdevs_operational": 3, 00:09:50.155 "base_bdevs_list": [ 00:09:50.155 { 00:09:50.155 "name": null, 00:09:50.155 "uuid": "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e", 00:09:50.155 "is_configured": false, 00:09:50.155 "data_offset": 0, 00:09:50.155 "data_size": 63488 00:09:50.155 }, 00:09:50.155 { 00:09:50.155 "name": "BaseBdev2", 00:09:50.155 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2", 00:09:50.155 "is_configured": true, 00:09:50.155 "data_offset": 2048, 00:09:50.155 "data_size": 63488 00:09:50.155 }, 00:09:50.155 { 00:09:50.155 "name": "BaseBdev3", 00:09:50.155 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164", 00:09:50.155 "is_configured": true, 00:09:50.155 "data_offset": 2048, 00:09:50.155 "data_size": 63488 00:09:50.155 } 00:09:50.155 ] 00:09:50.155 }' 00:09:50.156 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.156 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.416 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.416 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.416 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.416 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.416 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.676 [2024-12-06 18:07:02.672927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:50.676 [2024-12-06 18:07:02.673193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:50.676 [2024-12-06 18:07:02.673212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:50.676 [2024-12-06 18:07:02.673488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:50.676 NewBaseBdev 00:09:50.676 [2024-12-06 18:07:02.673669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:50.676 [2024-12-06 18:07:02.673686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:50.676 [2024-12-06 18:07:02.673854] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.676 [ 00:09:50.676 { 00:09:50.676 "name": "NewBaseBdev", 00:09:50.676 "aliases": [ 00:09:50.676 "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e" 00:09:50.676 ], 00:09:50.676 "product_name": "Malloc disk", 00:09:50.676 "block_size": 512, 00:09:50.676 "num_blocks": 65536, 00:09:50.676 "uuid": 
"5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e", 00:09:50.676 "assigned_rate_limits": { 00:09:50.676 "rw_ios_per_sec": 0, 00:09:50.676 "rw_mbytes_per_sec": 0, 00:09:50.676 "r_mbytes_per_sec": 0, 00:09:50.676 "w_mbytes_per_sec": 0 00:09:50.676 }, 00:09:50.676 "claimed": true, 00:09:50.676 "claim_type": "exclusive_write", 00:09:50.676 "zoned": false, 00:09:50.676 "supported_io_types": { 00:09:50.676 "read": true, 00:09:50.676 "write": true, 00:09:50.676 "unmap": true, 00:09:50.676 "flush": true, 00:09:50.676 "reset": true, 00:09:50.676 "nvme_admin": false, 00:09:50.676 "nvme_io": false, 00:09:50.676 "nvme_io_md": false, 00:09:50.676 "write_zeroes": true, 00:09:50.676 "zcopy": true, 00:09:50.676 "get_zone_info": false, 00:09:50.676 "zone_management": false, 00:09:50.676 "zone_append": false, 00:09:50.676 "compare": false, 00:09:50.676 "compare_and_write": false, 00:09:50.676 "abort": true, 00:09:50.676 "seek_hole": false, 00:09:50.676 "seek_data": false, 00:09:50.676 "copy": true, 00:09:50.676 "nvme_iov_md": false 00:09:50.676 }, 00:09:50.676 "memory_domains": [ 00:09:50.676 { 00:09:50.676 "dma_device_id": "system", 00:09:50.676 "dma_device_type": 1 00:09:50.676 }, 00:09:50.676 { 00:09:50.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.676 "dma_device_type": 2 00:09:50.676 } 00:09:50.676 ], 00:09:50.676 "driver_specific": {} 00:09:50.676 } 00:09:50.676 ] 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.676 18:07:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.676 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.676 "name": "Existed_Raid", 00:09:50.676 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657", 00:09:50.676 "strip_size_kb": 64, 00:09:50.676 "state": "online", 00:09:50.676 "raid_level": "concat", 00:09:50.676 "superblock": true, 00:09:50.676 "num_base_bdevs": 3, 00:09:50.676 "num_base_bdevs_discovered": 3, 00:09:50.677 "num_base_bdevs_operational": 3, 00:09:50.677 "base_bdevs_list": [ 00:09:50.677 { 00:09:50.677 "name": "NewBaseBdev", 00:09:50.677 "uuid": "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e", 00:09:50.677 "is_configured": 
true, 00:09:50.677 "data_offset": 2048, 00:09:50.677 "data_size": 63488 00:09:50.677 }, 00:09:50.677 { 00:09:50.677 "name": "BaseBdev2", 00:09:50.677 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2", 00:09:50.677 "is_configured": true, 00:09:50.677 "data_offset": 2048, 00:09:50.677 "data_size": 63488 00:09:50.677 }, 00:09:50.677 { 00:09:50.677 "name": "BaseBdev3", 00:09:50.677 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164", 00:09:50.677 "is_configured": true, 00:09:50.677 "data_offset": 2048, 00:09:50.677 "data_size": 63488 00:09:50.677 } 00:09:50.677 ] 00:09:50.677 }' 00:09:50.677 18:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.677 18:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.243 [2024-12-06 18:07:03.160475] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.243 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.243 "name": "Existed_Raid", 00:09:51.243 "aliases": [ 00:09:51.243 "ac996c0a-9384-45ba-87d5-ec6fb3638657" 00:09:51.243 ], 00:09:51.243 "product_name": "Raid Volume", 00:09:51.244 "block_size": 512, 00:09:51.244 "num_blocks": 190464, 00:09:51.244 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657", 00:09:51.244 "assigned_rate_limits": { 00:09:51.244 "rw_ios_per_sec": 0, 00:09:51.244 "rw_mbytes_per_sec": 0, 00:09:51.244 "r_mbytes_per_sec": 0, 00:09:51.244 "w_mbytes_per_sec": 0 00:09:51.244 }, 00:09:51.244 "claimed": false, 00:09:51.244 "zoned": false, 00:09:51.244 "supported_io_types": { 00:09:51.244 "read": true, 00:09:51.244 "write": true, 00:09:51.244 "unmap": true, 00:09:51.244 "flush": true, 00:09:51.244 "reset": true, 00:09:51.244 "nvme_admin": false, 00:09:51.244 "nvme_io": false, 00:09:51.244 "nvme_io_md": false, 00:09:51.244 "write_zeroes": true, 00:09:51.244 "zcopy": false, 00:09:51.244 "get_zone_info": false, 00:09:51.244 "zone_management": false, 00:09:51.244 "zone_append": false, 00:09:51.244 "compare": false, 00:09:51.244 "compare_and_write": false, 00:09:51.244 "abort": false, 00:09:51.244 "seek_hole": false, 00:09:51.244 "seek_data": false, 00:09:51.244 "copy": false, 00:09:51.244 "nvme_iov_md": false 00:09:51.244 }, 00:09:51.244 "memory_domains": [ 00:09:51.244 { 00:09:51.244 "dma_device_id": "system", 00:09:51.244 "dma_device_type": 1 00:09:51.244 }, 00:09:51.244 { 00:09:51.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.244 "dma_device_type": 2 00:09:51.244 }, 00:09:51.244 { 00:09:51.244 "dma_device_id": "system", 00:09:51.244 "dma_device_type": 1 00:09:51.244 }, 00:09:51.244 { 00:09:51.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.244 
"dma_device_type": 2 00:09:51.244 }, 00:09:51.244 { 00:09:51.244 "dma_device_id": "system", 00:09:51.244 "dma_device_type": 1 00:09:51.244 }, 00:09:51.244 { 00:09:51.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.244 "dma_device_type": 2 00:09:51.244 } 00:09:51.244 ], 00:09:51.244 "driver_specific": { 00:09:51.244 "raid": { 00:09:51.244 "uuid": "ac996c0a-9384-45ba-87d5-ec6fb3638657", 00:09:51.244 "strip_size_kb": 64, 00:09:51.244 "state": "online", 00:09:51.244 "raid_level": "concat", 00:09:51.244 "superblock": true, 00:09:51.244 "num_base_bdevs": 3, 00:09:51.244 "num_base_bdevs_discovered": 3, 00:09:51.244 "num_base_bdevs_operational": 3, 00:09:51.244 "base_bdevs_list": [ 00:09:51.244 { 00:09:51.244 "name": "NewBaseBdev", 00:09:51.244 "uuid": "5e14dfc1-d3e0-4884-85fc-a5bf25c8fe5e", 00:09:51.244 "is_configured": true, 00:09:51.244 "data_offset": 2048, 00:09:51.244 "data_size": 63488 00:09:51.244 }, 00:09:51.244 { 00:09:51.244 "name": "BaseBdev2", 00:09:51.244 "uuid": "9f84b033-43e3-4397-9409-f12490a4fda2", 00:09:51.244 "is_configured": true, 00:09:51.244 "data_offset": 2048, 00:09:51.244 "data_size": 63488 00:09:51.244 }, 00:09:51.244 { 00:09:51.244 "name": "BaseBdev3", 00:09:51.244 "uuid": "56cc47d7-fe42-451a-8191-60fc057da164", 00:09:51.244 "is_configured": true, 00:09:51.244 "data_offset": 2048, 00:09:51.244 "data_size": 63488 00:09:51.244 } 00:09:51.244 ] 00:09:51.244 } 00:09:51.244 } 00:09:51.244 }' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:51.244 BaseBdev2 00:09:51.244 BaseBdev3' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.244 
18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.244 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.244 [2024-12-06 18:07:03.407760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.503 [2024-12-06 18:07:03.407838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.503 [2024-12-06 18:07:03.407942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.503 [2024-12-06 18:07:03.408008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.503 [2024-12-06 18:07:03.408021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:51.503 18:07:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66664 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66664 ']' 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66664 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66664 00:09:51.503 killing process with pid 66664 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66664' 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66664 00:09:51.503 18:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66664 00:09:51.503 [2024-12-06 18:07:03.450984] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.763 [2024-12-06 18:07:03.757445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.237 18:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:53.237 00:09:53.237 real 0m10.753s 00:09:53.237 user 0m17.065s 00:09:53.237 sys 0m1.811s 00:09:53.237 18:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.237 ************************************ 00:09:53.237 
END TEST raid_state_function_test_sb 00:09:53.237 ************************************ 00:09:53.237 18:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.237 18:07:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:53.237 18:07:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:53.237 18:07:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.237 18:07:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.237 ************************************ 00:09:53.237 START TEST raid_superblock_test 00:09:53.237 ************************************ 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67290 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67290 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67290 ']' 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.237 18:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.237 [2024-12-06 18:07:05.121341] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:09:53.237 [2024-12-06 18:07:05.121556] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67290 ] 00:09:53.237 [2024-12-06 18:07:05.297545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.495 [2024-12-06 18:07:05.415296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.495 [2024-12-06 18:07:05.610668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.495 [2024-12-06 18:07:05.610727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:54.062 
18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.062 malloc1 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.062 [2024-12-06 18:07:06.105609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.062 [2024-12-06 18:07:06.105679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.062 [2024-12-06 18:07:06.105705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:54.062 [2024-12-06 18:07:06.105715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.062 [2024-12-06 18:07:06.107940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.062 [2024-12-06 18:07:06.108021] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.062 pt1 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.062 malloc2 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.062 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.062 [2024-12-06 18:07:06.162174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.062 [2024-12-06 18:07:06.162240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.062 [2024-12-06 18:07:06.162268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:54.062 [2024-12-06 18:07:06.162277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.062 [2024-12-06 18:07:06.164477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.062 [2024-12-06 18:07:06.164556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.062 
pt2 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.063 malloc3 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.063 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.322 [2024-12-06 18:07:06.230271] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.322 [2024-12-06 18:07:06.230336] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.322 [2024-12-06 18:07:06.230361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:54.322 [2024-12-06 18:07:06.230371] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.322 [2024-12-06 18:07:06.232648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.322 [2024-12-06 18:07:06.232745] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.322 pt3 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.322 [2024-12-06 18:07:06.242273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.322 [2024-12-06 18:07:06.244119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.322 [2024-12-06 18:07:06.244191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.322 [2024-12-06 18:07:06.244365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:54.322 [2024-12-06 18:07:06.244379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:54.322 [2024-12-06 18:07:06.244649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:54.322 [2024-12-06 18:07:06.244809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:54.322 [2024-12-06 18:07:06.244817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:54.322 [2024-12-06 18:07:06.244966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.322 18:07:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.322 "name": "raid_bdev1", 00:09:54.322 "uuid": "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6", 00:09:54.322 "strip_size_kb": 64, 00:09:54.322 "state": "online", 00:09:54.322 "raid_level": "concat", 00:09:54.322 "superblock": true, 00:09:54.322 "num_base_bdevs": 3, 00:09:54.322 "num_base_bdevs_discovered": 3, 00:09:54.322 "num_base_bdevs_operational": 3, 00:09:54.322 "base_bdevs_list": [ 00:09:54.322 { 00:09:54.322 "name": "pt1", 00:09:54.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.322 "is_configured": true, 00:09:54.322 "data_offset": 2048, 00:09:54.322 "data_size": 63488 00:09:54.322 }, 00:09:54.322 { 00:09:54.322 "name": "pt2", 00:09:54.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.322 "is_configured": true, 00:09:54.322 "data_offset": 2048, 00:09:54.322 "data_size": 63488 00:09:54.322 }, 00:09:54.322 { 00:09:54.322 "name": "pt3", 00:09:54.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.322 "is_configured": true, 00:09:54.322 "data_offset": 2048, 00:09:54.322 "data_size": 63488 00:09:54.322 } 00:09:54.322 ] 00:09:54.322 }' 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.322 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.581 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:54.581 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:54.581 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.582 [2024-12-06 18:07:06.677864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.582 "name": "raid_bdev1", 00:09:54.582 "aliases": [ 00:09:54.582 "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6" 00:09:54.582 ], 00:09:54.582 "product_name": "Raid Volume", 00:09:54.582 "block_size": 512, 00:09:54.582 "num_blocks": 190464, 00:09:54.582 "uuid": "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6", 00:09:54.582 "assigned_rate_limits": { 00:09:54.582 "rw_ios_per_sec": 0, 00:09:54.582 "rw_mbytes_per_sec": 0, 00:09:54.582 "r_mbytes_per_sec": 0, 00:09:54.582 "w_mbytes_per_sec": 0 00:09:54.582 }, 00:09:54.582 "claimed": false, 00:09:54.582 "zoned": false, 00:09:54.582 "supported_io_types": { 00:09:54.582 "read": true, 00:09:54.582 "write": true, 00:09:54.582 "unmap": true, 00:09:54.582 "flush": true, 00:09:54.582 "reset": true, 00:09:54.582 "nvme_admin": false, 00:09:54.582 "nvme_io": false, 00:09:54.582 "nvme_io_md": false, 00:09:54.582 "write_zeroes": true, 00:09:54.582 "zcopy": false, 00:09:54.582 "get_zone_info": false, 00:09:54.582 "zone_management": false, 00:09:54.582 "zone_append": false, 00:09:54.582 "compare": 
false, 00:09:54.582 "compare_and_write": false, 00:09:54.582 "abort": false, 00:09:54.582 "seek_hole": false, 00:09:54.582 "seek_data": false, 00:09:54.582 "copy": false, 00:09:54.582 "nvme_iov_md": false 00:09:54.582 }, 00:09:54.582 "memory_domains": [ 00:09:54.582 { 00:09:54.582 "dma_device_id": "system", 00:09:54.582 "dma_device_type": 1 00:09:54.582 }, 00:09:54.582 { 00:09:54.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.582 "dma_device_type": 2 00:09:54.582 }, 00:09:54.582 { 00:09:54.582 "dma_device_id": "system", 00:09:54.582 "dma_device_type": 1 00:09:54.582 }, 00:09:54.582 { 00:09:54.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.582 "dma_device_type": 2 00:09:54.582 }, 00:09:54.582 { 00:09:54.582 "dma_device_id": "system", 00:09:54.582 "dma_device_type": 1 00:09:54.582 }, 00:09:54.582 { 00:09:54.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.582 "dma_device_type": 2 00:09:54.582 } 00:09:54.582 ], 00:09:54.582 "driver_specific": { 00:09:54.582 "raid": { 00:09:54.582 "uuid": "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6", 00:09:54.582 "strip_size_kb": 64, 00:09:54.582 "state": "online", 00:09:54.582 "raid_level": "concat", 00:09:54.582 "superblock": true, 00:09:54.582 "num_base_bdevs": 3, 00:09:54.582 "num_base_bdevs_discovered": 3, 00:09:54.582 "num_base_bdevs_operational": 3, 00:09:54.582 "base_bdevs_list": [ 00:09:54.582 { 00:09:54.582 "name": "pt1", 00:09:54.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.582 "is_configured": true, 00:09:54.582 "data_offset": 2048, 00:09:54.582 "data_size": 63488 00:09:54.582 }, 00:09:54.582 { 00:09:54.582 "name": "pt2", 00:09:54.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.582 "is_configured": true, 00:09:54.582 "data_offset": 2048, 00:09:54.582 "data_size": 63488 00:09:54.582 }, 00:09:54.582 { 00:09:54.582 "name": "pt3", 00:09:54.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.582 "is_configured": true, 00:09:54.582 "data_offset": 2048, 00:09:54.582 
"data_size": 63488 00:09:54.582 } 00:09:54.582 ] 00:09:54.582 } 00:09:54.582 } 00:09:54.582 }' 00:09:54.582 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:54.842 pt2 00:09:54.842 pt3' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.842 [2024-12-06 18:07:06.961336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c66f5174-8b50-4bf1-8e1f-4a55dadab6e6 00:09:54.842 18:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c66f5174-8b50-4bf1-8e1f-4a55dadab6e6 ']' 00:09:54.842 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.842 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.842 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.842 [2024-12-06 18:07:07.004960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.842 [2024-12-06 18:07:07.005035] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.842 [2024-12-06 18:07:07.005223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.842 [2024-12-06 18:07:07.005343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.842 [2024-12-06 18:07:07.005406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.102 18:07:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.102 [2024-12-06 18:07:07.128795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:55.102 [2024-12-06 18:07:07.130690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:09:55.102 [2024-12-06 18:07:07.130741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:55.102 [2024-12-06 18:07:07.130794] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:55.102 [2024-12-06 18:07:07.130850] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:55.102 [2024-12-06 18:07:07.130870] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:55.102 [2024-12-06 18:07:07.130887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.102 [2024-12-06 18:07:07.130897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:55.102 request: 00:09:55.102 { 00:09:55.102 "name": "raid_bdev1", 00:09:55.102 "raid_level": "concat", 00:09:55.102 "base_bdevs": [ 00:09:55.102 "malloc1", 00:09:55.102 "malloc2", 00:09:55.102 "malloc3" 00:09:55.102 ], 00:09:55.102 "strip_size_kb": 64, 00:09:55.102 "superblock": false, 00:09:55.102 "method": "bdev_raid_create", 00:09:55.102 "req_id": 1 00:09:55.102 } 00:09:55.102 Got JSON-RPC error response 00:09:55.102 response: 00:09:55.102 { 00:09:55.102 "code": -17, 00:09:55.102 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:55.102 } 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:09:55.102 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 [2024-12-06 18:07:07.172635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:55.103 [2024-12-06 18:07:07.172742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.103 [2024-12-06 18:07:07.172797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:55.103 [2024-12-06 18:07:07.172830] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.103 [2024-12-06 18:07:07.175072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.103 [2024-12-06 18:07:07.175150] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:55.103 [2024-12-06 18:07:07.175265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:55.103 [2024-12-06 18:07:07.175362] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:55.103 pt1 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.103 "name": "raid_bdev1", 
00:09:55.103 "uuid": "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6", 00:09:55.103 "strip_size_kb": 64, 00:09:55.103 "state": "configuring", 00:09:55.103 "raid_level": "concat", 00:09:55.103 "superblock": true, 00:09:55.103 "num_base_bdevs": 3, 00:09:55.103 "num_base_bdevs_discovered": 1, 00:09:55.103 "num_base_bdevs_operational": 3, 00:09:55.103 "base_bdevs_list": [ 00:09:55.103 { 00:09:55.103 "name": "pt1", 00:09:55.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.103 "is_configured": true, 00:09:55.103 "data_offset": 2048, 00:09:55.103 "data_size": 63488 00:09:55.103 }, 00:09:55.103 { 00:09:55.103 "name": null, 00:09:55.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.103 "is_configured": false, 00:09:55.103 "data_offset": 2048, 00:09:55.103 "data_size": 63488 00:09:55.103 }, 00:09:55.103 { 00:09:55.103 "name": null, 00:09:55.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.103 "is_configured": false, 00:09:55.103 "data_offset": 2048, 00:09:55.103 "data_size": 63488 00:09:55.103 } 00:09:55.103 ] 00:09:55.103 }' 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.103 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.672 [2024-12-06 18:07:07.596036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:55.672 [2024-12-06 18:07:07.596124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.672 [2024-12-06 18:07:07.596159] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:55.672 [2024-12-06 18:07:07.596170] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.672 [2024-12-06 18:07:07.596681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.672 [2024-12-06 18:07:07.596708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:55.672 [2024-12-06 18:07:07.596804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:55.672 [2024-12-06 18:07:07.596834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:55.672 pt2 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.672 [2024-12-06 18:07:07.604018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.672 "name": "raid_bdev1", 00:09:55.672 "uuid": "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6", 00:09:55.672 "strip_size_kb": 64, 00:09:55.672 "state": "configuring", 00:09:55.672 "raid_level": "concat", 00:09:55.672 "superblock": true, 00:09:55.672 "num_base_bdevs": 3, 00:09:55.672 "num_base_bdevs_discovered": 1, 00:09:55.672 "num_base_bdevs_operational": 3, 00:09:55.672 "base_bdevs_list": [ 00:09:55.672 { 00:09:55.672 "name": "pt1", 00:09:55.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.672 "is_configured": true, 00:09:55.672 "data_offset": 2048, 00:09:55.672 "data_size": 63488 00:09:55.672 }, 00:09:55.672 { 00:09:55.672 "name": null, 00:09:55.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.672 "is_configured": false, 00:09:55.672 "data_offset": 0, 00:09:55.672 "data_size": 63488 00:09:55.672 }, 00:09:55.672 { 00:09:55.672 "name": null, 00:09:55.672 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.672 "is_configured": false, 00:09:55.672 "data_offset": 2048, 00:09:55.672 "data_size": 63488 00:09:55.672 } 00:09:55.672 ] 00:09:55.672 }' 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.672 18:07:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.932 [2024-12-06 18:07:08.071242] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:55.932 [2024-12-06 18:07:08.071370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.932 [2024-12-06 18:07:08.071407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:55.932 [2024-12-06 18:07:08.071438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.932 [2024-12-06 18:07:08.071995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.932 [2024-12-06 18:07:08.072078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:55.932 [2024-12-06 18:07:08.072220] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:55.932 [2024-12-06 18:07:08.072285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:55.932 pt2 00:09:55.932 18:07:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.932 [2024-12-06 18:07:08.083206] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:55.932 [2024-12-06 18:07:08.083324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.932 [2024-12-06 18:07:08.083362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:55.932 [2024-12-06 18:07:08.083396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.932 [2024-12-06 18:07:08.083869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.932 [2024-12-06 18:07:08.083949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:55.932 [2024-12-06 18:07:08.084059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:55.932 [2024-12-06 18:07:08.084130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:55.932 [2024-12-06 18:07:08.084317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:55.932 [2024-12-06 18:07:08.084362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:55.932 [2024-12-06 18:07:08.084656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:55.932 [2024-12-06 18:07:08.084851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:55.932 [2024-12-06 18:07:08.084879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:55.932 [2024-12-06 18:07:08.085057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.932 pt3 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.932 18:07:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.932 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.192 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.192 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.192 "name": "raid_bdev1", 00:09:56.192 "uuid": "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6", 00:09:56.192 "strip_size_kb": 64, 00:09:56.192 "state": "online", 00:09:56.192 "raid_level": "concat", 00:09:56.192 "superblock": true, 00:09:56.192 "num_base_bdevs": 3, 00:09:56.192 "num_base_bdevs_discovered": 3, 00:09:56.192 "num_base_bdevs_operational": 3, 00:09:56.192 "base_bdevs_list": [ 00:09:56.192 { 00:09:56.192 "name": "pt1", 00:09:56.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.192 "is_configured": true, 00:09:56.192 "data_offset": 2048, 00:09:56.192 "data_size": 63488 00:09:56.192 }, 00:09:56.192 { 00:09:56.192 "name": "pt2", 00:09:56.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.192 "is_configured": true, 00:09:56.192 "data_offset": 2048, 00:09:56.192 "data_size": 63488 00:09:56.192 }, 00:09:56.192 { 00:09:56.192 "name": "pt3", 00:09:56.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.192 "is_configured": true, 00:09:56.192 "data_offset": 2048, 00:09:56.192 "data_size": 63488 00:09:56.192 } 00:09:56.192 ] 00:09:56.192 }' 00:09:56.192 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.192 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.452 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:56.452 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:09:56.452 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.452 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.453 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.453 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.453 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.453 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.453 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.453 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.453 [2024-12-06 18:07:08.542784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.453 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.453 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.453 "name": "raid_bdev1", 00:09:56.453 "aliases": [ 00:09:56.453 "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6" 00:09:56.453 ], 00:09:56.453 "product_name": "Raid Volume", 00:09:56.453 "block_size": 512, 00:09:56.453 "num_blocks": 190464, 00:09:56.453 "uuid": "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6", 00:09:56.453 "assigned_rate_limits": { 00:09:56.453 "rw_ios_per_sec": 0, 00:09:56.453 "rw_mbytes_per_sec": 0, 00:09:56.453 "r_mbytes_per_sec": 0, 00:09:56.453 "w_mbytes_per_sec": 0 00:09:56.453 }, 00:09:56.453 "claimed": false, 00:09:56.453 "zoned": false, 00:09:56.453 "supported_io_types": { 00:09:56.453 "read": true, 00:09:56.453 "write": true, 00:09:56.453 "unmap": true, 00:09:56.453 "flush": true, 00:09:56.453 "reset": true, 00:09:56.453 "nvme_admin": false, 00:09:56.453 "nvme_io": false, 00:09:56.453 
"nvme_io_md": false, 00:09:56.453 "write_zeroes": true, 00:09:56.453 "zcopy": false, 00:09:56.453 "get_zone_info": false, 00:09:56.453 "zone_management": false, 00:09:56.453 "zone_append": false, 00:09:56.453 "compare": false, 00:09:56.453 "compare_and_write": false, 00:09:56.453 "abort": false, 00:09:56.453 "seek_hole": false, 00:09:56.453 "seek_data": false, 00:09:56.453 "copy": false, 00:09:56.453 "nvme_iov_md": false 00:09:56.453 }, 00:09:56.453 "memory_domains": [ 00:09:56.453 { 00:09:56.453 "dma_device_id": "system", 00:09:56.453 "dma_device_type": 1 00:09:56.453 }, 00:09:56.453 { 00:09:56.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.453 "dma_device_type": 2 00:09:56.453 }, 00:09:56.453 { 00:09:56.453 "dma_device_id": "system", 00:09:56.453 "dma_device_type": 1 00:09:56.453 }, 00:09:56.453 { 00:09:56.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.453 "dma_device_type": 2 00:09:56.453 }, 00:09:56.453 { 00:09:56.453 "dma_device_id": "system", 00:09:56.453 "dma_device_type": 1 00:09:56.453 }, 00:09:56.453 { 00:09:56.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.453 "dma_device_type": 2 00:09:56.453 } 00:09:56.453 ], 00:09:56.453 "driver_specific": { 00:09:56.453 "raid": { 00:09:56.453 "uuid": "c66f5174-8b50-4bf1-8e1f-4a55dadab6e6", 00:09:56.453 "strip_size_kb": 64, 00:09:56.453 "state": "online", 00:09:56.453 "raid_level": "concat", 00:09:56.453 "superblock": true, 00:09:56.453 "num_base_bdevs": 3, 00:09:56.453 "num_base_bdevs_discovered": 3, 00:09:56.453 "num_base_bdevs_operational": 3, 00:09:56.453 "base_bdevs_list": [ 00:09:56.453 { 00:09:56.453 "name": "pt1", 00:09:56.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.453 "is_configured": true, 00:09:56.453 "data_offset": 2048, 00:09:56.453 "data_size": 63488 00:09:56.453 }, 00:09:56.453 { 00:09:56.453 "name": "pt2", 00:09:56.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.453 "is_configured": true, 00:09:56.453 "data_offset": 2048, 00:09:56.453 "data_size": 
63488 00:09:56.453 }, 00:09:56.453 { 00:09:56.453 "name": "pt3", 00:09:56.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.453 "is_configured": true, 00:09:56.453 "data_offset": 2048, 00:09:56.453 "data_size": 63488 00:09:56.453 } 00:09:56.453 ] 00:09:56.453 } 00:09:56.453 } 00:09:56.453 }' 00:09:56.453 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:56.713 pt2 00:09:56.713 pt3' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:09:56.713 [2024-12-06 18:07:08.822396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c66f5174-8b50-4bf1-8e1f-4a55dadab6e6 '!=' c66f5174-8b50-4bf1-8e1f-4a55dadab6e6 ']' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67290 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67290 ']' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67290 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.713 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67290 00:09:56.973 killing process with pid 67290 00:09:56.973 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.973 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.973 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67290' 00:09:56.973 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67290 00:09:56.973 18:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67290 00:09:56.973 [2024-12-06 18:07:08.892535] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.973 [2024-12-06 18:07:08.892667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.973 [2024-12-06 18:07:08.892762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.973 [2024-12-06 18:07:08.892787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:57.232 [2024-12-06 18:07:09.211960] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:58.613 ************************************ 00:09:58.613 END TEST raid_superblock_test 00:09:58.613 ************************************ 00:09:58.613 18:07:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:58.613 00:09:58.613 real 0m5.363s 00:09:58.613 user 0m7.695s 00:09:58.613 sys 0m0.881s 00:09:58.613 18:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.613 18:07:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.613 18:07:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:58.613 18:07:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:58.613 18:07:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.613 18:07:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.613 ************************************ 00:09:58.613 START TEST raid_read_error_test 00:09:58.613 ************************************ 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:58.613 18:07:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tSJOu9akFP 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67543 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67543 00:09:58.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67543 ']' 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.613 18:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.613 [2024-12-06 18:07:10.570705] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:09:58.613 [2024-12-06 18:07:10.570827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67543 ] 00:09:58.613 [2024-12-06 18:07:10.728406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.873 [2024-12-06 18:07:10.849271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.131 [2024-12-06 18:07:11.059267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.131 [2024-12-06 18:07:11.059358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.493 BaseBdev1_malloc 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.493 true 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.493 [2024-12-06 18:07:11.480285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:59.493 [2024-12-06 18:07:11.480342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.493 [2024-12-06 18:07:11.480363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:59.493 [2024-12-06 18:07:11.480375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.493 [2024-12-06 18:07:11.482509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.493 [2024-12-06 18:07:11.482553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:59.493 BaseBdev1 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.493 BaseBdev2_malloc 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.493 true 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.493 [2024-12-06 18:07:11.542823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:59.493 [2024-12-06 18:07:11.542880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.493 [2024-12-06 18:07:11.542899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:59.493 [2024-12-06 18:07:11.542910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.493 [2024-12-06 18:07:11.545203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.493 [2024-12-06 18:07:11.545244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:59.493 BaseBdev2 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.493 BaseBdev3_malloc 00:09:59.493 18:07:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.493 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.758 true 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.758 [2024-12-06 18:07:11.620595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:59.758 [2024-12-06 18:07:11.620729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.758 [2024-12-06 18:07:11.620753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:59.758 [2024-12-06 18:07:11.620764] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.758 [2024-12-06 18:07:11.622913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.758 [2024-12-06 18:07:11.622954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:59.758 BaseBdev3 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.758 [2024-12-06 18:07:11.628675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.758 [2024-12-06 18:07:11.630492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.758 [2024-12-06 18:07:11.630567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.758 [2024-12-06 18:07:11.630780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:59.758 [2024-12-06 18:07:11.630793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:59.758 [2024-12-06 18:07:11.631047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:59.758 [2024-12-06 18:07:11.631292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:59.758 [2024-12-06 18:07:11.631365] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:59.758 [2024-12-06 18:07:11.631601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.758 18:07:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.758 "name": "raid_bdev1", 00:09:59.758 "uuid": "b2930ec2-8113-48bc-8c5d-6ebdd2693d7e", 00:09:59.758 "strip_size_kb": 64, 00:09:59.758 "state": "online", 00:09:59.758 "raid_level": "concat", 00:09:59.758 "superblock": true, 00:09:59.758 "num_base_bdevs": 3, 00:09:59.758 "num_base_bdevs_discovered": 3, 00:09:59.758 "num_base_bdevs_operational": 3, 00:09:59.758 "base_bdevs_list": [ 00:09:59.758 { 00:09:59.758 "name": "BaseBdev1", 00:09:59.758 "uuid": "ee1ca975-c9c9-557a-b080-3b388c07f21a", 00:09:59.758 "is_configured": true, 00:09:59.758 "data_offset": 2048, 00:09:59.758 "data_size": 63488 00:09:59.758 }, 00:09:59.758 { 00:09:59.758 "name": "BaseBdev2", 00:09:59.758 "uuid": "0b19118a-0c6f-52a9-b872-a851ea901319", 00:09:59.758 "is_configured": true, 00:09:59.758 "data_offset": 2048, 00:09:59.758 "data_size": 63488 
00:09:59.758 }, 00:09:59.758 { 00:09:59.758 "name": "BaseBdev3", 00:09:59.758 "uuid": "bc01d360-b155-5b99-9f77-19e4adbdcd4b", 00:09:59.758 "is_configured": true, 00:09:59.758 "data_offset": 2048, 00:09:59.758 "data_size": 63488 00:09:59.758 } 00:09:59.758 ] 00:09:59.758 }' 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.758 18:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.024 18:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:00.024 18:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:00.283 [2024-12-06 18:07:12.193079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:01.221 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:01.221 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.221 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.221 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.221 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:01.221 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:01.221 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.222 "name": "raid_bdev1", 00:10:01.222 "uuid": "b2930ec2-8113-48bc-8c5d-6ebdd2693d7e", 00:10:01.222 "strip_size_kb": 64, 00:10:01.222 "state": "online", 00:10:01.222 "raid_level": "concat", 00:10:01.222 "superblock": true, 00:10:01.222 "num_base_bdevs": 3, 00:10:01.222 "num_base_bdevs_discovered": 3, 00:10:01.222 "num_base_bdevs_operational": 3, 00:10:01.222 "base_bdevs_list": [ 00:10:01.222 { 00:10:01.222 "name": "BaseBdev1", 00:10:01.222 "uuid": "ee1ca975-c9c9-557a-b080-3b388c07f21a", 00:10:01.222 "is_configured": true, 00:10:01.222 "data_offset": 2048, 00:10:01.222 "data_size": 63488 
00:10:01.222 }, 00:10:01.222 { 00:10:01.222 "name": "BaseBdev2", 00:10:01.222 "uuid": "0b19118a-0c6f-52a9-b872-a851ea901319", 00:10:01.222 "is_configured": true, 00:10:01.222 "data_offset": 2048, 00:10:01.222 "data_size": 63488 00:10:01.222 }, 00:10:01.222 { 00:10:01.222 "name": "BaseBdev3", 00:10:01.222 "uuid": "bc01d360-b155-5b99-9f77-19e4adbdcd4b", 00:10:01.222 "is_configured": true, 00:10:01.222 "data_offset": 2048, 00:10:01.222 "data_size": 63488 00:10:01.222 } 00:10:01.222 ] 00:10:01.222 }' 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.222 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.481 [2024-12-06 18:07:13.614366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.481 [2024-12-06 18:07:13.614444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.481 [2024-12-06 18:07:13.617660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.481 [2024-12-06 18:07:13.617755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.481 [2024-12-06 18:07:13.617819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.481 [2024-12-06 18:07:13.617866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:01.481 { 00:10:01.481 "results": [ 00:10:01.481 { 00:10:01.481 "job": "raid_bdev1", 00:10:01.481 "core_mask": "0x1", 00:10:01.481 "workload": "randrw", 00:10:01.481 "percentage": 50, 
00:10:01.481 "status": "finished", 00:10:01.481 "queue_depth": 1, 00:10:01.481 "io_size": 131072, 00:10:01.481 "runtime": 1.42238, 00:10:01.481 "iops": 13790.267017252774, 00:10:01.481 "mibps": 1723.7833771565968, 00:10:01.481 "io_failed": 1, 00:10:01.481 "io_timeout": 0, 00:10:01.481 "avg_latency_us": 100.2561673208574, 00:10:01.481 "min_latency_us": 28.618340611353712, 00:10:01.481 "max_latency_us": 1631.2454148471616 00:10:01.481 } 00:10:01.481 ], 00:10:01.481 "core_count": 1 00:10:01.481 } 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67543 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67543 ']' 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67543 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.481 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67543 00:10:01.741 killing process with pid 67543 00:10:01.741 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.741 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.741 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67543' 00:10:01.741 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67543 00:10:01.741 18:07:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67543 00:10:01.741 [2024-12-06 18:07:13.651717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.741 [2024-12-06 
18:07:13.902106] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tSJOu9akFP 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:03.120 ************************************ 00:10:03.120 END TEST raid_read_error_test 00:10:03.120 ************************************ 00:10:03.120 00:10:03.120 real 0m4.651s 00:10:03.120 user 0m5.565s 00:10:03.120 sys 0m0.569s 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.120 18:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.120 18:07:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:03.120 18:07:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.120 18:07:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.120 18:07:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.120 ************************************ 00:10:03.120 START TEST raid_write_error_test 00:10:03.120 ************************************ 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:03.120 18:07:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:03.120 18:07:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UXnXDKMSZH 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67688 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67688 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67688 ']' 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.120 18:07:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.379 [2024-12-06 18:07:15.290654] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:10:03.379 [2024-12-06 18:07:15.290779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67688 ] 00:10:03.379 [2024-12-06 18:07:15.465137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.638 [2024-12-06 18:07:15.583948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.638 [2024-12-06 18:07:15.792274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.638 [2024-12-06 18:07:15.792337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 BaseBdev1_malloc 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 true 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 [2024-12-06 18:07:16.190941] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.208 [2024-12-06 18:07:16.191047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.208 [2024-12-06 18:07:16.191116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:04.208 [2024-12-06 18:07:16.191149] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.208 [2024-12-06 18:07:16.193457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.208 [2024-12-06 18:07:16.193537] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.208 BaseBdev1 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.208 BaseBdev2_malloc 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 true 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 [2024-12-06 18:07:16.259498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:04.208 [2024-12-06 18:07:16.259557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.208 [2024-12-06 18:07:16.259577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:04.208 [2024-12-06 18:07:16.259589] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.208 [2024-12-06 18:07:16.261890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.208 [2024-12-06 18:07:16.261939] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:04.208 BaseBdev2 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.208 18:07:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 BaseBdev3_malloc 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 true 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 [2024-12-06 18:07:16.345776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:04.208 [2024-12-06 18:07:16.345830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.208 [2024-12-06 18:07:16.345866] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:04.208 [2024-12-06 18:07:16.345879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.208 [2024-12-06 18:07:16.348317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.208 [2024-12-06 18:07:16.348361] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:04.208 BaseBdev3 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.208 [2024-12-06 18:07:16.357840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.208 [2024-12-06 18:07:16.359915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.208 [2024-12-06 18:07:16.359998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.208 [2024-12-06 18:07:16.360241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:04.208 [2024-12-06 18:07:16.360257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:04.208 [2024-12-06 18:07:16.360530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:04.208 [2024-12-06 18:07:16.360706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.208 [2024-12-06 18:07:16.360721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:04.208 [2024-12-06 18:07:16.360871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.208 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.468 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.468 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.468 "name": "raid_bdev1", 00:10:04.468 "uuid": "6499ce40-7957-4c5e-8ee2-6f0d24d64f87", 00:10:04.468 "strip_size_kb": 64, 00:10:04.468 "state": "online", 00:10:04.468 "raid_level": "concat", 00:10:04.468 "superblock": true, 00:10:04.468 "num_base_bdevs": 3, 00:10:04.468 "num_base_bdevs_discovered": 3, 00:10:04.468 "num_base_bdevs_operational": 3, 00:10:04.468 "base_bdevs_list": [ 00:10:04.468 { 00:10:04.468 
"name": "BaseBdev1", 00:10:04.468 "uuid": "e0316097-095e-5590-b241-353572ceca1e", 00:10:04.468 "is_configured": true, 00:10:04.468 "data_offset": 2048, 00:10:04.468 "data_size": 63488 00:10:04.468 }, 00:10:04.468 { 00:10:04.468 "name": "BaseBdev2", 00:10:04.468 "uuid": "52d33694-b347-51d3-b34c-3ef980c9d2ba", 00:10:04.468 "is_configured": true, 00:10:04.468 "data_offset": 2048, 00:10:04.468 "data_size": 63488 00:10:04.468 }, 00:10:04.468 { 00:10:04.468 "name": "BaseBdev3", 00:10:04.468 "uuid": "112cc160-b8a6-53af-9d74-c464042b9b2c", 00:10:04.468 "is_configured": true, 00:10:04.468 "data_offset": 2048, 00:10:04.468 "data_size": 63488 00:10:04.468 } 00:10:04.468 ] 00:10:04.468 }' 00:10:04.468 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.468 18:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.729 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:04.729 18:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:04.988 [2024-12-06 18:07:16.946253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.929 "name": "raid_bdev1", 00:10:05.929 "uuid": "6499ce40-7957-4c5e-8ee2-6f0d24d64f87", 00:10:05.929 "strip_size_kb": 64, 00:10:05.929 "state": "online", 
00:10:05.929 "raid_level": "concat", 00:10:05.929 "superblock": true, 00:10:05.929 "num_base_bdevs": 3, 00:10:05.929 "num_base_bdevs_discovered": 3, 00:10:05.929 "num_base_bdevs_operational": 3, 00:10:05.929 "base_bdevs_list": [ 00:10:05.929 { 00:10:05.929 "name": "BaseBdev1", 00:10:05.929 "uuid": "e0316097-095e-5590-b241-353572ceca1e", 00:10:05.929 "is_configured": true, 00:10:05.929 "data_offset": 2048, 00:10:05.929 "data_size": 63488 00:10:05.929 }, 00:10:05.929 { 00:10:05.929 "name": "BaseBdev2", 00:10:05.929 "uuid": "52d33694-b347-51d3-b34c-3ef980c9d2ba", 00:10:05.929 "is_configured": true, 00:10:05.929 "data_offset": 2048, 00:10:05.929 "data_size": 63488 00:10:05.929 }, 00:10:05.929 { 00:10:05.929 "name": "BaseBdev3", 00:10:05.929 "uuid": "112cc160-b8a6-53af-9d74-c464042b9b2c", 00:10:05.929 "is_configured": true, 00:10:05.929 "data_offset": 2048, 00:10:05.929 "data_size": 63488 00:10:05.929 } 00:10:05.929 ] 00:10:05.929 }' 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.929 18:07:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.189 18:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.189 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.189 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.189 [2024-12-06 18:07:18.274277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.189 [2024-12-06 18:07:18.274315] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.189 [2024-12-06 18:07:18.277345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.189 [2024-12-06 18:07:18.277430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.189 [2024-12-06 18:07:18.277501] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.190 [2024-12-06 18:07:18.277553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:06.190 { 00:10:06.190 "results": [ 00:10:06.190 { 00:10:06.190 "job": "raid_bdev1", 00:10:06.190 "core_mask": "0x1", 00:10:06.190 "workload": "randrw", 00:10:06.190 "percentage": 50, 00:10:06.190 "status": "finished", 00:10:06.190 "queue_depth": 1, 00:10:06.190 "io_size": 131072, 00:10:06.190 "runtime": 1.328743, 00:10:06.190 "iops": 14683.050070630663, 00:10:06.190 "mibps": 1835.3812588288329, 00:10:06.190 "io_failed": 1, 00:10:06.190 "io_timeout": 0, 00:10:06.190 "avg_latency_us": 94.25608619838009, 00:10:06.190 "min_latency_us": 27.72401746724891, 00:10:06.190 "max_latency_us": 1445.2262008733624 00:10:06.190 } 00:10:06.190 ], 00:10:06.190 "core_count": 1 00:10:06.190 } 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67688 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67688 ']' 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67688 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67688 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67688' 00:10:06.190 killing process with pid 67688 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67688 00:10:06.190 [2024-12-06 18:07:18.324074] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.190 18:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67688 00:10:06.450 [2024-12-06 18:07:18.580030] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UXnXDKMSZH 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:07.833 00:10:07.833 real 0m4.640s 00:10:07.833 user 0m5.556s 00:10:07.833 sys 0m0.550s 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.833 18:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.833 ************************************ 00:10:07.833 END TEST raid_write_error_test 00:10:07.833 ************************************ 00:10:07.833 18:07:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:07.833 18:07:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:07.833 18:07:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:07.833 18:07:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.833 18:07:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.833 ************************************ 00:10:07.833 START TEST raid_state_function_test 00:10:07.833 ************************************ 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67832 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67832' 00:10:07.833 Process raid pid: 67832 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67832 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67832 ']' 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.833 18:07:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.833 [2024-12-06 18:07:19.996938] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:10:07.833 [2024-12-06 18:07:19.997268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.093 [2024-12-06 18:07:20.177398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.353 [2024-12-06 18:07:20.295719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.353 [2024-12-06 18:07:20.502608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.353 [2024-12-06 18:07:20.502648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.924 [2024-12-06 18:07:20.870284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.924 [2024-12-06 18:07:20.870393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.924 [2024-12-06 18:07:20.870442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.924 [2024-12-06 18:07:20.870469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.924 [2024-12-06 18:07:20.870491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.924 [2024-12-06 18:07:20.870515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.924 
18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.924 "name": "Existed_Raid", 00:10:08.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.924 "strip_size_kb": 0, 00:10:08.924 "state": "configuring", 00:10:08.924 "raid_level": "raid1", 00:10:08.924 "superblock": false, 00:10:08.924 "num_base_bdevs": 3, 00:10:08.924 "num_base_bdevs_discovered": 0, 00:10:08.924 "num_base_bdevs_operational": 3, 00:10:08.924 "base_bdevs_list": [ 00:10:08.924 { 00:10:08.924 "name": "BaseBdev1", 00:10:08.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.924 "is_configured": false, 00:10:08.924 "data_offset": 0, 00:10:08.924 "data_size": 0 00:10:08.924 }, 00:10:08.924 { 00:10:08.924 "name": "BaseBdev2", 00:10:08.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.924 "is_configured": false, 00:10:08.924 "data_offset": 0, 00:10:08.924 "data_size": 0 00:10:08.924 }, 00:10:08.924 { 00:10:08.924 "name": "BaseBdev3", 00:10:08.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.924 "is_configured": false, 00:10:08.924 "data_offset": 0, 00:10:08.924 "data_size": 0 00:10:08.924 } 00:10:08.924 ] 00:10:08.924 }' 00:10:08.924 18:07:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.924 18:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.184 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.184 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.184 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.184 [2024-12-06 18:07:21.349413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.184 [2024-12-06 18:07:21.349504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.444 [2024-12-06 18:07:21.361356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.444 [2024-12-06 18:07:21.361436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.444 [2024-12-06 18:07:21.361463] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.444 [2024-12-06 18:07:21.361484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.444 [2024-12-06 18:07:21.361502] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.444 [2024-12-06 18:07:21.361522] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.444 [2024-12-06 18:07:21.408501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.444 BaseBdev1 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.444 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.445 [ 00:10:09.445 { 00:10:09.445 "name": "BaseBdev1", 00:10:09.445 "aliases": [ 00:10:09.445 "20c4548c-419c-4b62-ba43-130af0e50f31" 00:10:09.445 ], 00:10:09.445 "product_name": "Malloc disk", 00:10:09.445 "block_size": 512, 00:10:09.445 "num_blocks": 65536, 00:10:09.445 "uuid": "20c4548c-419c-4b62-ba43-130af0e50f31", 00:10:09.445 "assigned_rate_limits": { 00:10:09.445 "rw_ios_per_sec": 0, 00:10:09.445 "rw_mbytes_per_sec": 0, 00:10:09.445 "r_mbytes_per_sec": 0, 00:10:09.445 "w_mbytes_per_sec": 0 00:10:09.445 }, 00:10:09.445 "claimed": true, 00:10:09.445 "claim_type": "exclusive_write", 00:10:09.445 "zoned": false, 00:10:09.445 "supported_io_types": { 00:10:09.445 "read": true, 00:10:09.445 "write": true, 00:10:09.445 "unmap": true, 00:10:09.445 "flush": true, 00:10:09.445 "reset": true, 00:10:09.445 "nvme_admin": false, 00:10:09.445 "nvme_io": false, 00:10:09.445 "nvme_io_md": false, 00:10:09.445 "write_zeroes": true, 00:10:09.445 "zcopy": true, 00:10:09.445 "get_zone_info": false, 00:10:09.445 "zone_management": false, 00:10:09.445 "zone_append": false, 00:10:09.445 "compare": false, 00:10:09.445 "compare_and_write": false, 00:10:09.445 "abort": true, 00:10:09.445 "seek_hole": false, 00:10:09.445 "seek_data": false, 00:10:09.445 "copy": true, 00:10:09.445 "nvme_iov_md": false 00:10:09.445 }, 00:10:09.445 "memory_domains": [ 00:10:09.445 { 00:10:09.445 "dma_device_id": "system", 00:10:09.445 "dma_device_type": 1 00:10:09.445 }, 00:10:09.445 { 00:10:09.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.445 "dma_device_type": 2 00:10:09.445 } 00:10:09.445 ], 00:10:09.445 "driver_specific": {} 00:10:09.445 } 00:10:09.445 ] 00:10:09.445 18:07:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:09.445 "name": "Existed_Raid", 00:10:09.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.445 "strip_size_kb": 0, 00:10:09.445 "state": "configuring", 00:10:09.445 "raid_level": "raid1", 00:10:09.445 "superblock": false, 00:10:09.445 "num_base_bdevs": 3, 00:10:09.445 "num_base_bdevs_discovered": 1, 00:10:09.445 "num_base_bdevs_operational": 3, 00:10:09.445 "base_bdevs_list": [ 00:10:09.445 { 00:10:09.445 "name": "BaseBdev1", 00:10:09.445 "uuid": "20c4548c-419c-4b62-ba43-130af0e50f31", 00:10:09.445 "is_configured": true, 00:10:09.445 "data_offset": 0, 00:10:09.445 "data_size": 65536 00:10:09.445 }, 00:10:09.445 { 00:10:09.445 "name": "BaseBdev2", 00:10:09.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.445 "is_configured": false, 00:10:09.445 "data_offset": 0, 00:10:09.445 "data_size": 0 00:10:09.445 }, 00:10:09.445 { 00:10:09.445 "name": "BaseBdev3", 00:10:09.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.445 "is_configured": false, 00:10:09.445 "data_offset": 0, 00:10:09.445 "data_size": 0 00:10:09.445 } 00:10:09.445 ] 00:10:09.445 }' 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.445 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.014 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.014 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.014 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.014 [2024-12-06 18:07:21.911735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.014 [2024-12-06 18:07:21.911894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:10.014 18:07:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.014 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.014 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.014 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.015 [2024-12-06 18:07:21.923778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.015 [2024-12-06 18:07:21.925849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.015 [2024-12-06 18:07:21.925931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.015 [2024-12-06 18:07:21.925980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.015 [2024-12-06 18:07:21.926016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.015 "name": "Existed_Raid", 00:10:10.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.015 "strip_size_kb": 0, 00:10:10.015 "state": "configuring", 00:10:10.015 "raid_level": "raid1", 00:10:10.015 "superblock": false, 00:10:10.015 "num_base_bdevs": 3, 00:10:10.015 "num_base_bdevs_discovered": 1, 00:10:10.015 "num_base_bdevs_operational": 3, 00:10:10.015 "base_bdevs_list": [ 00:10:10.015 { 00:10:10.015 "name": "BaseBdev1", 00:10:10.015 "uuid": "20c4548c-419c-4b62-ba43-130af0e50f31", 00:10:10.015 "is_configured": true, 00:10:10.015 "data_offset": 0, 00:10:10.015 "data_size": 65536 00:10:10.015 }, 00:10:10.015 { 00:10:10.015 "name": "BaseBdev2", 00:10:10.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.015 
"is_configured": false, 00:10:10.015 "data_offset": 0, 00:10:10.015 "data_size": 0 00:10:10.015 }, 00:10:10.015 { 00:10:10.015 "name": "BaseBdev3", 00:10:10.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.015 "is_configured": false, 00:10:10.015 "data_offset": 0, 00:10:10.015 "data_size": 0 00:10:10.015 } 00:10:10.015 ] 00:10:10.015 }' 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.015 18:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.274 [2024-12-06 18:07:22.414307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.274 BaseBdev2 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.274 18:07:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.274 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.274 [ 00:10:10.274 { 00:10:10.274 "name": "BaseBdev2", 00:10:10.274 "aliases": [ 00:10:10.534 "4f961c8b-81b0-45bd-81a7-54b50946f2cd" 00:10:10.534 ], 00:10:10.534 "product_name": "Malloc disk", 00:10:10.534 "block_size": 512, 00:10:10.534 "num_blocks": 65536, 00:10:10.534 "uuid": "4f961c8b-81b0-45bd-81a7-54b50946f2cd", 00:10:10.534 "assigned_rate_limits": { 00:10:10.534 "rw_ios_per_sec": 0, 00:10:10.534 "rw_mbytes_per_sec": 0, 00:10:10.534 "r_mbytes_per_sec": 0, 00:10:10.534 "w_mbytes_per_sec": 0 00:10:10.534 }, 00:10:10.534 "claimed": true, 00:10:10.534 "claim_type": "exclusive_write", 00:10:10.534 "zoned": false, 00:10:10.534 "supported_io_types": { 00:10:10.534 "read": true, 00:10:10.534 "write": true, 00:10:10.534 "unmap": true, 00:10:10.534 "flush": true, 00:10:10.534 "reset": true, 00:10:10.534 "nvme_admin": false, 00:10:10.534 "nvme_io": false, 00:10:10.534 "nvme_io_md": false, 00:10:10.534 "write_zeroes": true, 00:10:10.534 "zcopy": true, 00:10:10.534 "get_zone_info": false, 00:10:10.534 "zone_management": false, 00:10:10.534 "zone_append": false, 00:10:10.534 "compare": false, 00:10:10.534 "compare_and_write": false, 00:10:10.534 "abort": true, 00:10:10.534 "seek_hole": false, 00:10:10.534 "seek_data": false, 00:10:10.534 "copy": true, 00:10:10.534 "nvme_iov_md": false 00:10:10.534 }, 00:10:10.534 
"memory_domains": [ 00:10:10.534 { 00:10:10.534 "dma_device_id": "system", 00:10:10.534 "dma_device_type": 1 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.534 "dma_device_type": 2 00:10:10.534 } 00:10:10.534 ], 00:10:10.534 "driver_specific": {} 00:10:10.534 } 00:10:10.534 ] 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.534 "name": "Existed_Raid", 00:10:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.534 "strip_size_kb": 0, 00:10:10.534 "state": "configuring", 00:10:10.534 "raid_level": "raid1", 00:10:10.534 "superblock": false, 00:10:10.534 "num_base_bdevs": 3, 00:10:10.534 "num_base_bdevs_discovered": 2, 00:10:10.534 "num_base_bdevs_operational": 3, 00:10:10.534 "base_bdevs_list": [ 00:10:10.534 { 00:10:10.534 "name": "BaseBdev1", 00:10:10.534 "uuid": "20c4548c-419c-4b62-ba43-130af0e50f31", 00:10:10.534 "is_configured": true, 00:10:10.534 "data_offset": 0, 00:10:10.534 "data_size": 65536 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "name": "BaseBdev2", 00:10:10.534 "uuid": "4f961c8b-81b0-45bd-81a7-54b50946f2cd", 00:10:10.534 "is_configured": true, 00:10:10.534 "data_offset": 0, 00:10:10.534 "data_size": 65536 00:10:10.534 }, 00:10:10.534 { 00:10:10.534 "name": "BaseBdev3", 00:10:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.534 "is_configured": false, 00:10:10.534 "data_offset": 0, 00:10:10.534 "data_size": 0 00:10:10.534 } 00:10:10.534 ] 00:10:10.534 }' 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.534 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.794 [2024-12-06 18:07:22.934790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.794 [2024-12-06 18:07:22.934936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:10.794 [2024-12-06 18:07:22.934970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:10.794 [2024-12-06 18:07:22.935298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:10.794 [2024-12-06 18:07:22.935533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:10.794 [2024-12-06 18:07:22.935576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:10.794 [2024-12-06 18:07:22.935892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.794 BaseBdev3 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.794 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.795 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.795 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.795 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.054 [ 00:10:11.054 { 00:10:11.054 "name": "BaseBdev3", 00:10:11.054 "aliases": [ 00:10:11.054 "b13705a6-3dcb-41e8-9c10-038856a86fd7" 00:10:11.054 ], 00:10:11.054 "product_name": "Malloc disk", 00:10:11.054 "block_size": 512, 00:10:11.054 "num_blocks": 65536, 00:10:11.054 "uuid": "b13705a6-3dcb-41e8-9c10-038856a86fd7", 00:10:11.054 "assigned_rate_limits": { 00:10:11.054 "rw_ios_per_sec": 0, 00:10:11.054 "rw_mbytes_per_sec": 0, 00:10:11.054 "r_mbytes_per_sec": 0, 00:10:11.054 "w_mbytes_per_sec": 0 00:10:11.054 }, 00:10:11.054 "claimed": true, 00:10:11.054 "claim_type": "exclusive_write", 00:10:11.054 "zoned": false, 00:10:11.054 "supported_io_types": { 00:10:11.054 "read": true, 00:10:11.054 "write": true, 00:10:11.054 "unmap": true, 00:10:11.054 "flush": true, 00:10:11.054 "reset": true, 00:10:11.054 "nvme_admin": false, 00:10:11.054 "nvme_io": false, 00:10:11.054 "nvme_io_md": false, 00:10:11.054 "write_zeroes": true, 00:10:11.054 "zcopy": true, 00:10:11.055 "get_zone_info": false, 00:10:11.055 "zone_management": false, 00:10:11.055 "zone_append": false, 00:10:11.055 "compare": false, 00:10:11.055 "compare_and_write": false, 00:10:11.055 "abort": true, 00:10:11.055 "seek_hole": false, 00:10:11.055 "seek_data": false, 00:10:11.055 
"copy": true, 00:10:11.055 "nvme_iov_md": false 00:10:11.055 }, 00:10:11.055 "memory_domains": [ 00:10:11.055 { 00:10:11.055 "dma_device_id": "system", 00:10:11.055 "dma_device_type": 1 00:10:11.055 }, 00:10:11.055 { 00:10:11.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.055 "dma_device_type": 2 00:10:11.055 } 00:10:11.055 ], 00:10:11.055 "driver_specific": {} 00:10:11.055 } 00:10:11.055 ] 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.055 18:07:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.055 18:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.055 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.055 "name": "Existed_Raid", 00:10:11.055 "uuid": "e87fb51b-3de6-430c-b128-82bfafb827d3", 00:10:11.055 "strip_size_kb": 0, 00:10:11.055 "state": "online", 00:10:11.055 "raid_level": "raid1", 00:10:11.055 "superblock": false, 00:10:11.055 "num_base_bdevs": 3, 00:10:11.055 "num_base_bdevs_discovered": 3, 00:10:11.055 "num_base_bdevs_operational": 3, 00:10:11.055 "base_bdevs_list": [ 00:10:11.055 { 00:10:11.055 "name": "BaseBdev1", 00:10:11.055 "uuid": "20c4548c-419c-4b62-ba43-130af0e50f31", 00:10:11.055 "is_configured": true, 00:10:11.055 "data_offset": 0, 00:10:11.055 "data_size": 65536 00:10:11.055 }, 00:10:11.055 { 00:10:11.055 "name": "BaseBdev2", 00:10:11.055 "uuid": "4f961c8b-81b0-45bd-81a7-54b50946f2cd", 00:10:11.055 "is_configured": true, 00:10:11.055 "data_offset": 0, 00:10:11.055 "data_size": 65536 00:10:11.055 }, 00:10:11.055 { 00:10:11.055 "name": "BaseBdev3", 00:10:11.055 "uuid": "b13705a6-3dcb-41e8-9c10-038856a86fd7", 00:10:11.055 "is_configured": true, 00:10:11.055 "data_offset": 0, 00:10:11.055 "data_size": 65536 00:10:11.055 } 00:10:11.055 ] 00:10:11.055 }' 00:10:11.055 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.055 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.323 18:07:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.323 [2024-12-06 18:07:23.406445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.323 "name": "Existed_Raid", 00:10:11.323 "aliases": [ 00:10:11.323 "e87fb51b-3de6-430c-b128-82bfafb827d3" 00:10:11.323 ], 00:10:11.323 "product_name": "Raid Volume", 00:10:11.323 "block_size": 512, 00:10:11.323 "num_blocks": 65536, 00:10:11.323 "uuid": "e87fb51b-3de6-430c-b128-82bfafb827d3", 00:10:11.323 "assigned_rate_limits": { 00:10:11.323 "rw_ios_per_sec": 0, 00:10:11.323 "rw_mbytes_per_sec": 0, 00:10:11.323 "r_mbytes_per_sec": 0, 00:10:11.323 "w_mbytes_per_sec": 0 00:10:11.323 }, 00:10:11.323 "claimed": false, 00:10:11.323 "zoned": false, 
00:10:11.323 "supported_io_types": { 00:10:11.323 "read": true, 00:10:11.323 "write": true, 00:10:11.323 "unmap": false, 00:10:11.323 "flush": false, 00:10:11.323 "reset": true, 00:10:11.323 "nvme_admin": false, 00:10:11.323 "nvme_io": false, 00:10:11.323 "nvme_io_md": false, 00:10:11.323 "write_zeroes": true, 00:10:11.323 "zcopy": false, 00:10:11.323 "get_zone_info": false, 00:10:11.323 "zone_management": false, 00:10:11.323 "zone_append": false, 00:10:11.323 "compare": false, 00:10:11.323 "compare_and_write": false, 00:10:11.323 "abort": false, 00:10:11.323 "seek_hole": false, 00:10:11.323 "seek_data": false, 00:10:11.323 "copy": false, 00:10:11.323 "nvme_iov_md": false 00:10:11.323 }, 00:10:11.323 "memory_domains": [ 00:10:11.323 { 00:10:11.323 "dma_device_id": "system", 00:10:11.323 "dma_device_type": 1 00:10:11.323 }, 00:10:11.323 { 00:10:11.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.323 "dma_device_type": 2 00:10:11.323 }, 00:10:11.323 { 00:10:11.323 "dma_device_id": "system", 00:10:11.323 "dma_device_type": 1 00:10:11.323 }, 00:10:11.323 { 00:10:11.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.323 "dma_device_type": 2 00:10:11.323 }, 00:10:11.323 { 00:10:11.323 "dma_device_id": "system", 00:10:11.323 "dma_device_type": 1 00:10:11.323 }, 00:10:11.323 { 00:10:11.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.323 "dma_device_type": 2 00:10:11.323 } 00:10:11.323 ], 00:10:11.323 "driver_specific": { 00:10:11.323 "raid": { 00:10:11.323 "uuid": "e87fb51b-3de6-430c-b128-82bfafb827d3", 00:10:11.323 "strip_size_kb": 0, 00:10:11.323 "state": "online", 00:10:11.323 "raid_level": "raid1", 00:10:11.323 "superblock": false, 00:10:11.323 "num_base_bdevs": 3, 00:10:11.323 "num_base_bdevs_discovered": 3, 00:10:11.323 "num_base_bdevs_operational": 3, 00:10:11.323 "base_bdevs_list": [ 00:10:11.323 { 00:10:11.323 "name": "BaseBdev1", 00:10:11.323 "uuid": "20c4548c-419c-4b62-ba43-130af0e50f31", 00:10:11.323 "is_configured": true, 00:10:11.323 
"data_offset": 0, 00:10:11.323 "data_size": 65536 00:10:11.323 }, 00:10:11.323 { 00:10:11.323 "name": "BaseBdev2", 00:10:11.323 "uuid": "4f961c8b-81b0-45bd-81a7-54b50946f2cd", 00:10:11.323 "is_configured": true, 00:10:11.323 "data_offset": 0, 00:10:11.323 "data_size": 65536 00:10:11.323 }, 00:10:11.323 { 00:10:11.323 "name": "BaseBdev3", 00:10:11.323 "uuid": "b13705a6-3dcb-41e8-9c10-038856a86fd7", 00:10:11.323 "is_configured": true, 00:10:11.323 "data_offset": 0, 00:10:11.323 "data_size": 65536 00:10:11.323 } 00:10:11.323 ] 00:10:11.323 } 00:10:11.323 } 00:10:11.323 }' 00:10:11.323 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.596 BaseBdev2 00:10:11.596 BaseBdev3' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.596 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.596 [2024-12-06 18:07:23.693688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.855 "name": "Existed_Raid", 00:10:11.855 "uuid": "e87fb51b-3de6-430c-b128-82bfafb827d3", 00:10:11.855 "strip_size_kb": 0, 00:10:11.855 "state": "online", 00:10:11.855 "raid_level": "raid1", 00:10:11.855 "superblock": false, 00:10:11.855 "num_base_bdevs": 3, 00:10:11.855 "num_base_bdevs_discovered": 2, 00:10:11.855 "num_base_bdevs_operational": 2, 00:10:11.855 "base_bdevs_list": [ 00:10:11.855 { 00:10:11.855 "name": null, 00:10:11.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.855 "is_configured": false, 00:10:11.855 "data_offset": 0, 00:10:11.855 "data_size": 65536 00:10:11.855 }, 00:10:11.855 { 00:10:11.855 "name": "BaseBdev2", 00:10:11.855 "uuid": "4f961c8b-81b0-45bd-81a7-54b50946f2cd", 00:10:11.855 "is_configured": true, 00:10:11.855 "data_offset": 0, 00:10:11.855 "data_size": 65536 00:10:11.855 }, 00:10:11.855 { 00:10:11.855 "name": "BaseBdev3", 00:10:11.855 "uuid": "b13705a6-3dcb-41e8-9c10-038856a86fd7", 00:10:11.855 "is_configured": true, 00:10:11.855 "data_offset": 0, 00:10:11.855 "data_size": 65536 00:10:11.855 } 00:10:11.855 ] 
00:10:11.855 }' 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.855 18:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.114 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.114 [2024-12-06 18:07:24.262343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.372 18:07:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.372 [2024-12-06 18:07:24.423630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.372 [2024-12-06 18:07:24.423799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.372 [2024-12-06 18:07:24.527856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.372 [2024-12-06 18:07:24.527994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.372 [2024-12-06 18:07:24.528049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.372 18:07:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.372 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.632 BaseBdev2 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.632 
18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.632 [ 00:10:12.632 { 00:10:12.632 "name": "BaseBdev2", 00:10:12.632 "aliases": [ 00:10:12.632 "f31ad11d-8df7-4c4e-b310-73907060ae0a" 00:10:12.632 ], 00:10:12.632 "product_name": "Malloc disk", 00:10:12.632 "block_size": 512, 00:10:12.632 "num_blocks": 65536, 00:10:12.632 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a", 00:10:12.632 "assigned_rate_limits": { 00:10:12.632 "rw_ios_per_sec": 0, 00:10:12.632 "rw_mbytes_per_sec": 0, 00:10:12.632 "r_mbytes_per_sec": 0, 00:10:12.632 "w_mbytes_per_sec": 0 00:10:12.632 }, 00:10:12.632 "claimed": false, 00:10:12.632 "zoned": false, 00:10:12.632 "supported_io_types": { 00:10:12.632 "read": true, 00:10:12.632 "write": true, 00:10:12.632 "unmap": true, 00:10:12.632 "flush": true, 00:10:12.632 "reset": true, 00:10:12.632 "nvme_admin": false, 00:10:12.632 "nvme_io": false, 00:10:12.632 "nvme_io_md": false, 00:10:12.632 "write_zeroes": true, 
00:10:12.632 "zcopy": true, 00:10:12.632 "get_zone_info": false, 00:10:12.632 "zone_management": false, 00:10:12.632 "zone_append": false, 00:10:12.632 "compare": false, 00:10:12.632 "compare_and_write": false, 00:10:12.632 "abort": true, 00:10:12.632 "seek_hole": false, 00:10:12.632 "seek_data": false, 00:10:12.632 "copy": true, 00:10:12.632 "nvme_iov_md": false 00:10:12.632 }, 00:10:12.632 "memory_domains": [ 00:10:12.632 { 00:10:12.632 "dma_device_id": "system", 00:10:12.632 "dma_device_type": 1 00:10:12.632 }, 00:10:12.632 { 00:10:12.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.632 "dma_device_type": 2 00:10:12.632 } 00:10:12.632 ], 00:10:12.632 "driver_specific": {} 00:10:12.632 } 00:10:12.632 ] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.632 BaseBdev3 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.632 18:07:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.632 [ 00:10:12.632 { 00:10:12.632 "name": "BaseBdev3", 00:10:12.632 "aliases": [ 00:10:12.632 "07d34600-baf4-4ac9-94bd-87f4b23faa67" 00:10:12.632 ], 00:10:12.632 "product_name": "Malloc disk", 00:10:12.632 "block_size": 512, 00:10:12.632 "num_blocks": 65536, 00:10:12.632 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67", 00:10:12.632 "assigned_rate_limits": { 00:10:12.632 "rw_ios_per_sec": 0, 00:10:12.632 "rw_mbytes_per_sec": 0, 00:10:12.632 "r_mbytes_per_sec": 0, 00:10:12.632 "w_mbytes_per_sec": 0 00:10:12.632 }, 00:10:12.632 "claimed": false, 00:10:12.632 "zoned": false, 00:10:12.632 "supported_io_types": { 00:10:12.632 "read": true, 00:10:12.632 "write": true, 00:10:12.632 "unmap": true, 00:10:12.632 "flush": true, 00:10:12.632 "reset": true, 00:10:12.632 "nvme_admin": false, 00:10:12.632 "nvme_io": false, 00:10:12.632 "nvme_io_md": false, 00:10:12.632 "write_zeroes": true, 
00:10:12.632 "zcopy": true, 00:10:12.632 "get_zone_info": false, 00:10:12.632 "zone_management": false, 00:10:12.632 "zone_append": false, 00:10:12.632 "compare": false, 00:10:12.632 "compare_and_write": false, 00:10:12.632 "abort": true, 00:10:12.632 "seek_hole": false, 00:10:12.632 "seek_data": false, 00:10:12.632 "copy": true, 00:10:12.632 "nvme_iov_md": false 00:10:12.632 }, 00:10:12.632 "memory_domains": [ 00:10:12.632 { 00:10:12.632 "dma_device_id": "system", 00:10:12.632 "dma_device_type": 1 00:10:12.632 }, 00:10:12.632 { 00:10:12.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.632 "dma_device_type": 2 00:10:12.632 } 00:10:12.632 ], 00:10:12.632 "driver_specific": {} 00:10:12.632 } 00:10:12.632 ] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.632 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.633 [2024-12-06 18:07:24.755173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.633 [2024-12-06 18:07:24.755293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.633 [2024-12-06 18:07:24.755350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.633 [2024-12-06 18:07:24.757600] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.633 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.891 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:12.891 "name": "Existed_Raid", 00:10:12.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.891 "strip_size_kb": 0, 00:10:12.891 "state": "configuring", 00:10:12.891 "raid_level": "raid1", 00:10:12.891 "superblock": false, 00:10:12.891 "num_base_bdevs": 3, 00:10:12.891 "num_base_bdevs_discovered": 2, 00:10:12.891 "num_base_bdevs_operational": 3, 00:10:12.891 "base_bdevs_list": [ 00:10:12.891 { 00:10:12.891 "name": "BaseBdev1", 00:10:12.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.891 "is_configured": false, 00:10:12.891 "data_offset": 0, 00:10:12.891 "data_size": 0 00:10:12.891 }, 00:10:12.891 { 00:10:12.891 "name": "BaseBdev2", 00:10:12.891 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a", 00:10:12.891 "is_configured": true, 00:10:12.891 "data_offset": 0, 00:10:12.891 "data_size": 65536 00:10:12.891 }, 00:10:12.891 { 00:10:12.891 "name": "BaseBdev3", 00:10:12.891 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67", 00:10:12.891 "is_configured": true, 00:10:12.891 "data_offset": 0, 00:10:12.891 "data_size": 65536 00:10:12.891 } 00:10:12.891 ] 00:10:12.891 }' 00:10:12.891 18:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.891 18:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.149 [2024-12-06 18:07:25.186448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.149 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.149 "name": "Existed_Raid", 00:10:13.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.149 "strip_size_kb": 0, 00:10:13.150 "state": "configuring", 00:10:13.150 "raid_level": "raid1", 00:10:13.150 "superblock": false, 00:10:13.150 "num_base_bdevs": 3, 
00:10:13.150 "num_base_bdevs_discovered": 1, 00:10:13.150 "num_base_bdevs_operational": 3, 00:10:13.150 "base_bdevs_list": [ 00:10:13.150 { 00:10:13.150 "name": "BaseBdev1", 00:10:13.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.150 "is_configured": false, 00:10:13.150 "data_offset": 0, 00:10:13.150 "data_size": 0 00:10:13.150 }, 00:10:13.150 { 00:10:13.150 "name": null, 00:10:13.150 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a", 00:10:13.150 "is_configured": false, 00:10:13.150 "data_offset": 0, 00:10:13.150 "data_size": 65536 00:10:13.150 }, 00:10:13.150 { 00:10:13.150 "name": "BaseBdev3", 00:10:13.150 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67", 00:10:13.150 "is_configured": true, 00:10:13.150 "data_offset": 0, 00:10:13.150 "data_size": 65536 00:10:13.150 } 00:10:13.150 ] 00:10:13.150 }' 00:10:13.150 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.150 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.717 18:07:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.717 [2024-12-06 18:07:25.759326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.717 BaseBdev1 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.717 [ 00:10:13.717 { 00:10:13.717 "name": "BaseBdev1", 00:10:13.717 "aliases": [ 00:10:13.717 "4e6adbde-a045-40e8-b7e1-2c8f620df358" 00:10:13.717 ], 00:10:13.717 "product_name": "Malloc disk", 
00:10:13.717 "block_size": 512, 00:10:13.717 "num_blocks": 65536, 00:10:13.717 "uuid": "4e6adbde-a045-40e8-b7e1-2c8f620df358", 00:10:13.717 "assigned_rate_limits": { 00:10:13.717 "rw_ios_per_sec": 0, 00:10:13.717 "rw_mbytes_per_sec": 0, 00:10:13.717 "r_mbytes_per_sec": 0, 00:10:13.717 "w_mbytes_per_sec": 0 00:10:13.717 }, 00:10:13.717 "claimed": true, 00:10:13.717 "claim_type": "exclusive_write", 00:10:13.717 "zoned": false, 00:10:13.717 "supported_io_types": { 00:10:13.717 "read": true, 00:10:13.717 "write": true, 00:10:13.717 "unmap": true, 00:10:13.717 "flush": true, 00:10:13.717 "reset": true, 00:10:13.717 "nvme_admin": false, 00:10:13.717 "nvme_io": false, 00:10:13.717 "nvme_io_md": false, 00:10:13.717 "write_zeroes": true, 00:10:13.717 "zcopy": true, 00:10:13.717 "get_zone_info": false, 00:10:13.717 "zone_management": false, 00:10:13.717 "zone_append": false, 00:10:13.717 "compare": false, 00:10:13.717 "compare_and_write": false, 00:10:13.717 "abort": true, 00:10:13.717 "seek_hole": false, 00:10:13.717 "seek_data": false, 00:10:13.717 "copy": true, 00:10:13.717 "nvme_iov_md": false 00:10:13.717 }, 00:10:13.717 "memory_domains": [ 00:10:13.717 { 00:10:13.717 "dma_device_id": "system", 00:10:13.717 "dma_device_type": 1 00:10:13.717 }, 00:10:13.717 { 00:10:13.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.717 "dma_device_type": 2 00:10:13.717 } 00:10:13.717 ], 00:10:13.717 "driver_specific": {} 00:10:13.717 } 00:10:13.717 ] 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.717 "name": "Existed_Raid", 00:10:13.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.717 "strip_size_kb": 0, 00:10:13.717 "state": "configuring", 00:10:13.717 "raid_level": "raid1", 00:10:13.717 "superblock": false, 00:10:13.717 "num_base_bdevs": 3, 00:10:13.717 "num_base_bdevs_discovered": 2, 00:10:13.717 "num_base_bdevs_operational": 3, 00:10:13.717 "base_bdevs_list": [ 00:10:13.717 { 00:10:13.717 "name": "BaseBdev1", 00:10:13.717 "uuid": 
"4e6adbde-a045-40e8-b7e1-2c8f620df358", 00:10:13.717 "is_configured": true, 00:10:13.717 "data_offset": 0, 00:10:13.717 "data_size": 65536 00:10:13.717 }, 00:10:13.717 { 00:10:13.717 "name": null, 00:10:13.717 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a", 00:10:13.717 "is_configured": false, 00:10:13.717 "data_offset": 0, 00:10:13.717 "data_size": 65536 00:10:13.717 }, 00:10:13.717 { 00:10:13.717 "name": "BaseBdev3", 00:10:13.717 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67", 00:10:13.717 "is_configured": true, 00:10:13.717 "data_offset": 0, 00:10:13.717 "data_size": 65536 00:10:13.717 } 00:10:13.717 ] 00:10:13.717 }' 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.717 18:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.285 [2024-12-06 18:07:26.326465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.285 18:07:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.285 "name": "Existed_Raid", 00:10:14.285 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:14.285 "strip_size_kb": 0, 00:10:14.285 "state": "configuring", 00:10:14.285 "raid_level": "raid1", 00:10:14.285 "superblock": false, 00:10:14.285 "num_base_bdevs": 3, 00:10:14.285 "num_base_bdevs_discovered": 1, 00:10:14.285 "num_base_bdevs_operational": 3, 00:10:14.285 "base_bdevs_list": [ 00:10:14.285 { 00:10:14.285 "name": "BaseBdev1", 00:10:14.285 "uuid": "4e6adbde-a045-40e8-b7e1-2c8f620df358", 00:10:14.285 "is_configured": true, 00:10:14.285 "data_offset": 0, 00:10:14.285 "data_size": 65536 00:10:14.285 }, 00:10:14.285 { 00:10:14.285 "name": null, 00:10:14.285 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a", 00:10:14.285 "is_configured": false, 00:10:14.285 "data_offset": 0, 00:10:14.285 "data_size": 65536 00:10:14.285 }, 00:10:14.285 { 00:10:14.285 "name": null, 00:10:14.285 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67", 00:10:14.285 "is_configured": false, 00:10:14.285 "data_offset": 0, 00:10:14.285 "data_size": 65536 00:10:14.285 } 00:10:14.285 ] 00:10:14.285 }' 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.285 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.900 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.900 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.900 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.900 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.900 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.900 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:14.900 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:14.900 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.900 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.900 [2024-12-06 18:07:26.841619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.901 "name": "Existed_Raid", 00:10:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.901 "strip_size_kb": 0, 00:10:14.901 "state": "configuring", 00:10:14.901 "raid_level": "raid1", 00:10:14.901 "superblock": false, 00:10:14.901 "num_base_bdevs": 3, 00:10:14.901 "num_base_bdevs_discovered": 2, 00:10:14.901 "num_base_bdevs_operational": 3, 00:10:14.901 "base_bdevs_list": [ 00:10:14.901 { 00:10:14.901 "name": "BaseBdev1", 00:10:14.901 "uuid": "4e6adbde-a045-40e8-b7e1-2c8f620df358", 00:10:14.901 "is_configured": true, 00:10:14.901 "data_offset": 0, 00:10:14.901 "data_size": 65536 00:10:14.901 }, 00:10:14.901 { 00:10:14.901 "name": null, 00:10:14.901 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a", 00:10:14.901 "is_configured": false, 00:10:14.901 "data_offset": 0, 00:10:14.901 "data_size": 65536 00:10:14.901 }, 00:10:14.901 { 00:10:14.901 "name": "BaseBdev3", 00:10:14.901 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67", 00:10:14.901 "is_configured": true, 00:10:14.901 "data_offset": 0, 00:10:14.901 "data_size": 65536 00:10:14.901 } 00:10:14.901 ] 00:10:14.901 }' 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.901 18:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.158 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.158 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.158 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:15.158 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.158 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.417 [2024-12-06 18:07:27.352779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.417 18:07:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.417 "name": "Existed_Raid", 00:10:15.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.417 "strip_size_kb": 0, 00:10:15.417 "state": "configuring", 00:10:15.417 "raid_level": "raid1", 00:10:15.417 "superblock": false, 00:10:15.417 "num_base_bdevs": 3, 00:10:15.417 "num_base_bdevs_discovered": 1, 00:10:15.417 "num_base_bdevs_operational": 3, 00:10:15.417 "base_bdevs_list": [ 00:10:15.417 { 00:10:15.417 "name": null, 00:10:15.417 "uuid": "4e6adbde-a045-40e8-b7e1-2c8f620df358", 00:10:15.417 "is_configured": false, 00:10:15.417 "data_offset": 0, 00:10:15.417 "data_size": 65536 00:10:15.417 }, 00:10:15.417 { 00:10:15.417 "name": null, 00:10:15.417 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a", 00:10:15.417 "is_configured": false, 00:10:15.417 "data_offset": 0, 00:10:15.417 "data_size": 65536 00:10:15.417 }, 00:10:15.417 { 00:10:15.417 "name": "BaseBdev3", 00:10:15.417 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67", 00:10:15.417 "is_configured": true, 00:10:15.417 "data_offset": 0, 00:10:15.417 "data_size": 65536 00:10:15.417 } 00:10:15.417 ] 00:10:15.417 }' 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.417 18:07:27 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.983 [2024-12-06 18:07:27.949186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3
00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.983 18:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:15.984 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:15.984 "name": "Existed_Raid",
00:10:15.984 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:15.984 "strip_size_kb": 0,
00:10:15.984 "state": "configuring",
00:10:15.984 "raid_level": "raid1",
00:10:15.984 "superblock": false,
00:10:15.984 "num_base_bdevs": 3,
00:10:15.984 "num_base_bdevs_discovered": 2,
00:10:15.984 "num_base_bdevs_operational": 3,
00:10:15.984 "base_bdevs_list": [
00:10:15.984 {
00:10:15.984 "name": null,
00:10:15.984 "uuid": "4e6adbde-a045-40e8-b7e1-2c8f620df358",
00:10:15.984 "is_configured": false,
00:10:15.984 "data_offset": 0,
00:10:15.984 "data_size": 65536
00:10:15.984 },
00:10:15.984 {
00:10:15.984 "name": "BaseBdev2",
00:10:15.984 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a",
00:10:15.984 "is_configured": true,
00:10:15.984 "data_offset": 0,
00:10:15.984 "data_size": 65536
00:10:15.984 },
00:10:15.984 {
00:10:15.984 "name": "BaseBdev3",
00:10:15.984 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67",
00:10:15.984 "is_configured": true,
00:10:15.984 "data_offset": 0,
00:10:15.984 "data_size": 65536
00:10:15.984 }
00:10:15.984 ]
00:10:15.984 }'
00:10:15.984 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:15.984 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4e6adbde-a045-40e8-b7e1-2c8f620df358
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.549 [2024-12-06 18:07:28.555938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:16.549 NewBaseBdev
00:10:16.549 [2024-12-06 18:07:28.556155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:10:16.549 [2024-12-06 18:07:28.556172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:10:16.549 [2024-12-06 18:07:28.556486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:10:16.549 [2024-12-06 18:07:28.556671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:10:16.549 [2024-12-06 18:07:28.556685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:10:16.549 [2024-12-06 18:07:28.556981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:16.549 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.549 [
00:10:16.549 {
00:10:16.549 "name": "NewBaseBdev",
00:10:16.549 "aliases": [
00:10:16.549 "4e6adbde-a045-40e8-b7e1-2c8f620df358"
00:10:16.549 ],
00:10:16.549 "product_name": "Malloc disk",
00:10:16.549 "block_size": 512,
00:10:16.549 "num_blocks": 65536,
00:10:16.549 "uuid": "4e6adbde-a045-40e8-b7e1-2c8f620df358",
00:10:16.549 "assigned_rate_limits": {
00:10:16.549 "rw_ios_per_sec": 0,
00:10:16.549 "rw_mbytes_per_sec": 0,
00:10:16.549 "r_mbytes_per_sec": 0,
00:10:16.549 "w_mbytes_per_sec": 0
00:10:16.549 },
00:10:16.549 "claimed": true,
00:10:16.549 "claim_type": "exclusive_write",
00:10:16.549 "zoned": false,
00:10:16.549 "supported_io_types": {
00:10:16.549 "read": true,
00:10:16.550 "write": true,
00:10:16.550 "unmap": true,
00:10:16.550 "flush": true,
00:10:16.550 "reset": true,
00:10:16.550 "nvme_admin": false,
00:10:16.550 "nvme_io": false,
00:10:16.550 "nvme_io_md": false,
00:10:16.550 "write_zeroes": true,
00:10:16.550 "zcopy": true,
00:10:16.550 "get_zone_info": false,
00:10:16.550 "zone_management": false,
00:10:16.550 "zone_append": false,
00:10:16.550 "compare": false,
00:10:16.550 "compare_and_write": false,
00:10:16.550 "abort": true,
00:10:16.550 "seek_hole": false,
00:10:16.550 "seek_data": false,
00:10:16.550 "copy": true,
00:10:16.550 "nvme_iov_md": false
00:10:16.550 },
00:10:16.550 "memory_domains": [
00:10:16.550 {
00:10:16.550 "dma_device_id": "system",
00:10:16.550 "dma_device_type": 1
00:10:16.550 },
00:10:16.550 {
00:10:16.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:16.550 "dma_device_type": 2
00:10:16.550 }
00:10:16.550 ],
00:10:16.550 "driver_specific": {}
00:10:16.550 }
00:10:16.550 ]
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:16.550 "name": "Existed_Raid",
00:10:16.550 "uuid": "56c9147f-0e7b-4b09-b64b-dce476f3f5d1",
00:10:16.550 "strip_size_kb": 0,
00:10:16.550 "state": "online",
00:10:16.550 "raid_level": "raid1",
00:10:16.550 "superblock": false,
00:10:16.550 "num_base_bdevs": 3,
00:10:16.550 "num_base_bdevs_discovered": 3,
00:10:16.550 "num_base_bdevs_operational": 3,
00:10:16.550 "base_bdevs_list": [
00:10:16.550 {
00:10:16.550 "name": "NewBaseBdev",
00:10:16.550 "uuid": "4e6adbde-a045-40e8-b7e1-2c8f620df358",
00:10:16.550 "is_configured": true,
00:10:16.550 "data_offset": 0,
00:10:16.550 "data_size": 65536
00:10:16.550 },
00:10:16.550 {
00:10:16.550 "name": "BaseBdev2",
00:10:16.550 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a",
00:10:16.550 "is_configured": true,
00:10:16.550 "data_offset": 0,
00:10:16.550 "data_size": 65536
00:10:16.550 },
00:10:16.550 {
00:10:16.550 "name": "BaseBdev3",
00:10:16.550 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67",
00:10:16.550 "is_configured": true,
00:10:16.550 "data_offset": 0,
00:10:16.550 "data_size": 65536
00:10:16.550 }
00:10:16.550 ]
00:10:16.550 }'
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:16.550 18:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.122 [2024-12-06 18:07:29.019718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.122 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:17.122 "name": "Existed_Raid",
00:10:17.122 "aliases": [
00:10:17.122 "56c9147f-0e7b-4b09-b64b-dce476f3f5d1"
00:10:17.122 ],
00:10:17.122 "product_name": "Raid Volume",
00:10:17.122 "block_size": 512,
00:10:17.122 "num_blocks": 65536,
00:10:17.122 "uuid": "56c9147f-0e7b-4b09-b64b-dce476f3f5d1",
00:10:17.122 "assigned_rate_limits": {
00:10:17.122 "rw_ios_per_sec": 0,
00:10:17.122 "rw_mbytes_per_sec": 0,
00:10:17.122 "r_mbytes_per_sec": 0,
00:10:17.122 "w_mbytes_per_sec": 0
00:10:17.122 },
00:10:17.122 "claimed": false,
00:10:17.122 "zoned": false,
00:10:17.122 "supported_io_types": {
00:10:17.122 "read": true,
00:10:17.122 "write": true,
00:10:17.122 "unmap": false,
00:10:17.122 "flush": false,
00:10:17.122 "reset": true,
00:10:17.122 "nvme_admin": false,
00:10:17.122 "nvme_io": false,
00:10:17.122 "nvme_io_md": false,
00:10:17.122 "write_zeroes": true,
00:10:17.122 "zcopy": false,
00:10:17.122 "get_zone_info": false,
00:10:17.122 "zone_management": false,
00:10:17.122 "zone_append": false,
00:10:17.122 "compare": false,
00:10:17.122 "compare_and_write": false,
00:10:17.122 "abort": false,
00:10:17.122 "seek_hole": false,
00:10:17.122 "seek_data": false,
00:10:17.122 "copy": false,
00:10:17.122 "nvme_iov_md": false
00:10:17.122 },
00:10:17.122 "memory_domains": [
00:10:17.122 {
00:10:17.122 "dma_device_id": "system",
00:10:17.122 "dma_device_type": 1
00:10:17.122 },
00:10:17.122 {
00:10:17.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.122 "dma_device_type": 2
00:10:17.122 },
00:10:17.122 {
00:10:17.122 "dma_device_id": "system",
00:10:17.122 "dma_device_type": 1
00:10:17.122 },
00:10:17.122 {
00:10:17.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.122 "dma_device_type": 2
00:10:17.122 },
00:10:17.122 {
00:10:17.122 "dma_device_id": "system",
00:10:17.122 "dma_device_type": 1
00:10:17.122 },
00:10:17.122 {
00:10:17.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.122 "dma_device_type": 2
00:10:17.122 }
00:10:17.122 ],
00:10:17.122 "driver_specific": {
00:10:17.122 "raid": {
00:10:17.122 "uuid": "56c9147f-0e7b-4b09-b64b-dce476f3f5d1",
00:10:17.122 "strip_size_kb": 0,
00:10:17.122 "state": "online",
00:10:17.122 "raid_level": "raid1",
00:10:17.122 "superblock": false,
00:10:17.122 "num_base_bdevs": 3,
00:10:17.122 "num_base_bdevs_discovered": 3,
00:10:17.122 "num_base_bdevs_operational": 3,
00:10:17.122 "base_bdevs_list": [
00:10:17.122 {
00:10:17.123 "name": "NewBaseBdev",
00:10:17.123 "uuid": "4e6adbde-a045-40e8-b7e1-2c8f620df358",
00:10:17.123 "is_configured": true,
00:10:17.123 "data_offset": 0,
00:10:17.123 "data_size": 65536
00:10:17.123 },
00:10:17.123 {
00:10:17.123 "name": "BaseBdev2",
00:10:17.123 "uuid": "f31ad11d-8df7-4c4e-b310-73907060ae0a",
00:10:17.123 "is_configured": true,
00:10:17.123 "data_offset": 0,
00:10:17.123 "data_size": 65536
00:10:17.123 },
00:10:17.123 {
00:10:17.123 "name": "BaseBdev3",
00:10:17.123 "uuid": "07d34600-baf4-4ac9-94bd-87f4b23faa67",
00:10:17.123 "is_configured": true,
00:10:17.123 "data_offset": 0,
00:10:17.123 "data_size": 65536
00:10:17.123 }
00:10:17.123 ]
00:10:17.123 }
00:10:17.123 }
00:10:17.123 }'
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:17.123 BaseBdev2
00:10:17.123 BaseBdev3'
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.123 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.381 [2024-12-06 18:07:29.310904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:17.381 [2024-12-06 18:07:29.311015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:17.381 [2024-12-06 18:07:29.311161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:17.381 [2024-12-06 18:07:29.311562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:17.381 [2024-12-06 18:07:29.311629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67832
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67832 ']'
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67832
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67832
00:10:17.381 killing process with pid 67832
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67832'
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67832
00:10:17.381 18:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67832
00:10:17.381 [2024-12-06 18:07:29.357974] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:17.638 [2024-12-06 18:07:29.720812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:19.013 ************************************
00:10:19.013 END TEST raid_state_function_test
00:10:19.013 ************************************
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:10:19.013
00:10:19.013 real 0m11.160s
00:10:19.013 user 0m17.636s
00:10:19.013 sys 0m1.862s
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.013 18:07:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true
00:10:19.013 18:07:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:19.013 18:07:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:19.013 18:07:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:19.013 ************************************
00:10:19.013 START TEST raid_state_function_test_sb
00:10:19.013 ************************************
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:10:19.013 Process raid pid: 68459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68459
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68459'
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68459
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68459 ']'
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:19.013 18:07:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:19.272 [2024-12-06 18:07:31.192394] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization...
00:10:19.272 [2024-12-06 18:07:31.192660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:19.272 [2024-12-06 18:07:31.361689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:19.531 [2024-12-06 18:07:31.496948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.790 [2024-12-06 18:07:31.735543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:19.790 [2024-12-06 18:07:31.735692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.048 [2024-12-06 18:07:32.159908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:20.048 [2024-12-06 18:07:32.160025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:20.048 [2024-12-06 18:07:32.160080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:20.048 [2024-12-06 18:07:32.160112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:20.048 [2024-12-06 18:07:32.160135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:20.048 [2024-12-06 18:07:32.160160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:20.048 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:20.049 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:20.049 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:20.049 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:20.049 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:20.049 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.049 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.049 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.049 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.307 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:20.307 "name": "Existed_Raid",
00:10:20.307 "uuid": "1641c5fa-53f1-41d4-942c-85f42af2badb",
00:10:20.307 "strip_size_kb": 0,
00:10:20.307 "state": "configuring",
00:10:20.307 "raid_level": "raid1",
00:10:20.307 "superblock": true,
00:10:20.307 "num_base_bdevs": 3,
00:10:20.307 "num_base_bdevs_discovered": 0,
00:10:20.307 "num_base_bdevs_operational": 3,
00:10:20.307 "base_bdevs_list": [
00:10:20.307 {
00:10:20.307 "name": "BaseBdev1",
00:10:20.307 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:20.307 "is_configured": false,
00:10:20.307 "data_offset": 0,
00:10:20.307 "data_size": 0
00:10:20.307 },
00:10:20.307 {
00:10:20.307 "name": "BaseBdev2",
00:10:20.307 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:20.307 "is_configured": false,
00:10:20.307 "data_offset": 0,
00:10:20.307 "data_size": 0
00:10:20.307 },
00:10:20.307 {
00:10:20.307 "name": "BaseBdev3",
00:10:20.307 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:20.307 "is_configured": false,
00:10:20.307 "data_offset": 0,
00:10:20.307 "data_size": 0
00:10:20.307 }
00:10:20.307 ]
00:10:20.307 }'
00:10:20.307 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:20.307 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.566 [2024-12-06 18:07:32.587438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:20.566 [2024-12-06 18:07:32.587561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.566 [2024-12-06 18:07:32.599455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:20.566 [2024-12-06 18:07:32.599575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:20.566 [2024-12-06 18:07:32.599610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:20.566 [2024-12-06 18:07:32.599639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:20.566 [2024-12-06 18:07:32.599661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:20.566 [2024-12-06 18:07:32.599712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.566 [2024-12-06 18:07:32.652107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:20.566 BaseBdev1
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.566 [
00:10:20.566 {
00:10:20.566 "name": "BaseBdev1",
00:10:20.566 "aliases": [
00:10:20.566 "a26a4a0e-32fa-4f88-aa4b-c2fb956a640b"
00:10:20.566 ],
00:10:20.566 "product_name": "Malloc disk",
00:10:20.566 "block_size": 512,
00:10:20.566 "num_blocks": 65536,
00:10:20.566 "uuid": "a26a4a0e-32fa-4f88-aa4b-c2fb956a640b",
00:10:20.566 "assigned_rate_limits": {
00:10:20.566 "rw_ios_per_sec": 0,
00:10:20.566 "rw_mbytes_per_sec": 0,
00:10:20.566 "r_mbytes_per_sec": 0,
00:10:20.566 "w_mbytes_per_sec": 0
00:10:20.566 },
00:10:20.566 "claimed": true,
00:10:20.566 "claim_type": "exclusive_write",
00:10:20.566 "zoned": false,
00:10:20.566 "supported_io_types": {
00:10:20.566 "read": true,
00:10:20.566 "write": true,
00:10:20.566 "unmap": true,
00:10:20.566 "flush": true,
00:10:20.566 "reset": true,
00:10:20.566 "nvme_admin": false,
00:10:20.566 "nvme_io": false,
00:10:20.566 "nvme_io_md": false,
00:10:20.566 "write_zeroes": true,
00:10:20.566 "zcopy": true,
00:10:20.566 "get_zone_info": false,
00:10:20.566 "zone_management": false,
00:10:20.566 "zone_append": false,
00:10:20.566 "compare": false,
00:10:20.566 "compare_and_write": false,
00:10:20.566 "abort": true,
00:10:20.566 "seek_hole": false,
00:10:20.566 "seek_data": false,
00:10:20.566 "copy": true,
00:10:20.566 "nvme_iov_md": false
00:10:20.566 },
00:10:20.566 "memory_domains": [
00:10:20.566 {
00:10:20.566 "dma_device_id": "system",
00:10:20.566 "dma_device_type": 1
00:10:20.566 },
00:10:20.566 {
00:10:20.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:20.566 "dma_device_type": 2
00:10:20.566 }
00:10:20.566 ],
00:10:20.566 "driver_specific": {}
00:10:20.566 }
00:10:20.566 ]
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local
raid_level=raid1 00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.566 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.567 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.567 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.567 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.567 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.825 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.825 "name": "Existed_Raid", 00:10:20.825 "uuid": "e7dd39bb-eaa5-4d83-b999-23e295b00c8d", 00:10:20.825 "strip_size_kb": 0, 00:10:20.825 "state": "configuring", 00:10:20.825 "raid_level": "raid1", 00:10:20.825 "superblock": true, 00:10:20.825 "num_base_bdevs": 3, 00:10:20.825 "num_base_bdevs_discovered": 1, 00:10:20.825 "num_base_bdevs_operational": 3, 00:10:20.825 "base_bdevs_list": [ 00:10:20.825 { 00:10:20.825 "name": "BaseBdev1", 00:10:20.825 "uuid": "a26a4a0e-32fa-4f88-aa4b-c2fb956a640b", 00:10:20.825 "is_configured": true, 00:10:20.825 "data_offset": 2048, 00:10:20.825 "data_size": 63488 
00:10:20.825 }, 00:10:20.825 { 00:10:20.825 "name": "BaseBdev2", 00:10:20.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.825 "is_configured": false, 00:10:20.825 "data_offset": 0, 00:10:20.825 "data_size": 0 00:10:20.825 }, 00:10:20.825 { 00:10:20.825 "name": "BaseBdev3", 00:10:20.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.825 "is_configured": false, 00:10:20.825 "data_offset": 0, 00:10:20.825 "data_size": 0 00:10:20.825 } 00:10:20.825 ] 00:10:20.825 }' 00:10:20.825 18:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.825 18:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.085 [2024-12-06 18:07:33.151410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.085 [2024-12-06 18:07:33.151563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.085 [2024-12-06 18:07:33.163478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.085 [2024-12-06 18:07:33.165682] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.085 [2024-12-06 18:07:33.165786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.085 [2024-12-06 18:07:33.165822] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.085 [2024-12-06 18:07:33.165850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.085 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.086 "name": "Existed_Raid", 00:10:21.086 "uuid": "fb0b863b-b8ca-4f98-9ba5-1b648ca19d86", 00:10:21.086 "strip_size_kb": 0, 00:10:21.086 "state": "configuring", 00:10:21.086 "raid_level": "raid1", 00:10:21.086 "superblock": true, 00:10:21.086 "num_base_bdevs": 3, 00:10:21.086 "num_base_bdevs_discovered": 1, 00:10:21.086 "num_base_bdevs_operational": 3, 00:10:21.086 "base_bdevs_list": [ 00:10:21.086 { 00:10:21.086 "name": "BaseBdev1", 00:10:21.086 "uuid": "a26a4a0e-32fa-4f88-aa4b-c2fb956a640b", 00:10:21.086 "is_configured": true, 00:10:21.086 "data_offset": 2048, 00:10:21.086 "data_size": 63488 00:10:21.086 }, 00:10:21.086 { 00:10:21.086 "name": "BaseBdev2", 00:10:21.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.086 "is_configured": false, 00:10:21.086 "data_offset": 0, 00:10:21.086 "data_size": 0 00:10:21.086 }, 00:10:21.086 { 00:10:21.086 "name": "BaseBdev3", 00:10:21.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.086 "is_configured": false, 00:10:21.086 "data_offset": 0, 00:10:21.086 "data_size": 0 00:10:21.086 } 00:10:21.086 ] 00:10:21.086 }' 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.086 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.654 [2024-12-06 18:07:33.651517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.654 BaseBdev2 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.654 [ 00:10:21.654 { 00:10:21.654 "name": "BaseBdev2", 00:10:21.654 "aliases": [ 00:10:21.654 "2005cf2e-30d7-419f-b810-1a1ec72596bd" 00:10:21.654 ], 00:10:21.654 "product_name": "Malloc disk", 00:10:21.654 "block_size": 512, 00:10:21.654 "num_blocks": 65536, 00:10:21.654 "uuid": "2005cf2e-30d7-419f-b810-1a1ec72596bd", 00:10:21.654 "assigned_rate_limits": { 00:10:21.654 "rw_ios_per_sec": 0, 00:10:21.654 "rw_mbytes_per_sec": 0, 00:10:21.654 "r_mbytes_per_sec": 0, 00:10:21.654 "w_mbytes_per_sec": 0 00:10:21.654 }, 00:10:21.654 "claimed": true, 00:10:21.654 "claim_type": "exclusive_write", 00:10:21.654 "zoned": false, 00:10:21.654 "supported_io_types": { 00:10:21.654 "read": true, 00:10:21.654 "write": true, 00:10:21.654 "unmap": true, 00:10:21.654 "flush": true, 00:10:21.654 "reset": true, 00:10:21.654 "nvme_admin": false, 00:10:21.654 "nvme_io": false, 00:10:21.654 "nvme_io_md": false, 00:10:21.654 "write_zeroes": true, 00:10:21.654 "zcopy": true, 00:10:21.654 "get_zone_info": false, 00:10:21.654 "zone_management": false, 00:10:21.654 "zone_append": false, 00:10:21.654 "compare": false, 00:10:21.654 "compare_and_write": false, 00:10:21.654 "abort": true, 00:10:21.654 "seek_hole": false, 00:10:21.654 "seek_data": false, 00:10:21.654 "copy": true, 00:10:21.654 "nvme_iov_md": false 00:10:21.654 }, 00:10:21.654 "memory_domains": [ 00:10:21.654 { 00:10:21.654 "dma_device_id": "system", 00:10:21.654 "dma_device_type": 1 00:10:21.654 }, 00:10:21.654 { 00:10:21.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.654 "dma_device_type": 2 00:10:21.654 } 00:10:21.654 ], 00:10:21.654 "driver_specific": {} 00:10:21.654 } 00:10:21.654 ] 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.654 
18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.654 "name": "Existed_Raid", 00:10:21.654 "uuid": "fb0b863b-b8ca-4f98-9ba5-1b648ca19d86", 00:10:21.654 "strip_size_kb": 0, 00:10:21.654 "state": "configuring", 00:10:21.654 "raid_level": "raid1", 00:10:21.654 "superblock": true, 00:10:21.654 "num_base_bdevs": 3, 00:10:21.654 "num_base_bdevs_discovered": 2, 00:10:21.654 "num_base_bdevs_operational": 3, 00:10:21.654 "base_bdevs_list": [ 00:10:21.654 { 00:10:21.654 "name": "BaseBdev1", 00:10:21.654 "uuid": "a26a4a0e-32fa-4f88-aa4b-c2fb956a640b", 00:10:21.654 "is_configured": true, 00:10:21.654 "data_offset": 2048, 00:10:21.654 "data_size": 63488 00:10:21.654 }, 00:10:21.654 { 00:10:21.654 "name": "BaseBdev2", 00:10:21.654 "uuid": "2005cf2e-30d7-419f-b810-1a1ec72596bd", 00:10:21.654 "is_configured": true, 00:10:21.654 "data_offset": 2048, 00:10:21.654 "data_size": 63488 00:10:21.654 }, 00:10:21.654 { 00:10:21.654 "name": "BaseBdev3", 00:10:21.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.654 "is_configured": false, 00:10:21.654 "data_offset": 0, 00:10:21.654 "data_size": 0 00:10:21.654 } 00:10:21.654 ] 00:10:21.654 }' 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.654 18:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.222 [2024-12-06 18:07:34.240849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.222 BaseBdev3 00:10:22.222 [2024-12-06 18:07:34.241316] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007e80 00:10:22.222 [2024-12-06 18:07:34.241345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:22.222 [2024-12-06 18:07:34.241672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:22.222 [2024-12-06 18:07:34.241872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:22.222 [2024-12-06 18:07:34.241883] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:22.222 [2024-12-06 18:07:34.242054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.222 18:07:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.222 [ 00:10:22.222 { 00:10:22.222 "name": "BaseBdev3", 00:10:22.222 "aliases": [ 00:10:22.222 "bc7de93b-5a8a-4e86-a341-883f0d90b322" 00:10:22.222 ], 00:10:22.222 "product_name": "Malloc disk", 00:10:22.222 "block_size": 512, 00:10:22.222 "num_blocks": 65536, 00:10:22.222 "uuid": "bc7de93b-5a8a-4e86-a341-883f0d90b322", 00:10:22.222 "assigned_rate_limits": { 00:10:22.222 "rw_ios_per_sec": 0, 00:10:22.222 "rw_mbytes_per_sec": 0, 00:10:22.222 "r_mbytes_per_sec": 0, 00:10:22.222 "w_mbytes_per_sec": 0 00:10:22.222 }, 00:10:22.222 "claimed": true, 00:10:22.222 "claim_type": "exclusive_write", 00:10:22.222 "zoned": false, 00:10:22.222 "supported_io_types": { 00:10:22.222 "read": true, 00:10:22.222 "write": true, 00:10:22.222 "unmap": true, 00:10:22.222 "flush": true, 00:10:22.222 "reset": true, 00:10:22.222 "nvme_admin": false, 00:10:22.222 "nvme_io": false, 00:10:22.222 "nvme_io_md": false, 00:10:22.222 "write_zeroes": true, 00:10:22.222 "zcopy": true, 00:10:22.222 "get_zone_info": false, 00:10:22.222 "zone_management": false, 00:10:22.222 "zone_append": false, 00:10:22.222 "compare": false, 00:10:22.222 "compare_and_write": false, 00:10:22.222 "abort": true, 00:10:22.222 "seek_hole": false, 00:10:22.222 "seek_data": false, 00:10:22.222 "copy": true, 00:10:22.222 "nvme_iov_md": false 00:10:22.222 }, 00:10:22.222 "memory_domains": [ 00:10:22.222 { 00:10:22.222 "dma_device_id": "system", 00:10:22.222 "dma_device_type": 1 00:10:22.222 }, 00:10:22.222 { 00:10:22.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.222 "dma_device_type": 2 00:10:22.222 } 00:10:22.222 ], 00:10:22.222 "driver_specific": {} 00:10:22.222 } 00:10:22.222 ] 
00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.222 18:07:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.222 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.222 "name": "Existed_Raid", 00:10:22.222 "uuid": "fb0b863b-b8ca-4f98-9ba5-1b648ca19d86", 00:10:22.222 "strip_size_kb": 0, 00:10:22.222 "state": "online", 00:10:22.222 "raid_level": "raid1", 00:10:22.222 "superblock": true, 00:10:22.222 "num_base_bdevs": 3, 00:10:22.222 "num_base_bdevs_discovered": 3, 00:10:22.222 "num_base_bdevs_operational": 3, 00:10:22.222 "base_bdevs_list": [ 00:10:22.222 { 00:10:22.222 "name": "BaseBdev1", 00:10:22.222 "uuid": "a26a4a0e-32fa-4f88-aa4b-c2fb956a640b", 00:10:22.223 "is_configured": true, 00:10:22.223 "data_offset": 2048, 00:10:22.223 "data_size": 63488 00:10:22.223 }, 00:10:22.223 { 00:10:22.223 "name": "BaseBdev2", 00:10:22.223 "uuid": "2005cf2e-30d7-419f-b810-1a1ec72596bd", 00:10:22.223 "is_configured": true, 00:10:22.223 "data_offset": 2048, 00:10:22.223 "data_size": 63488 00:10:22.223 }, 00:10:22.223 { 00:10:22.223 "name": "BaseBdev3", 00:10:22.223 "uuid": "bc7de93b-5a8a-4e86-a341-883f0d90b322", 00:10:22.223 "is_configured": true, 00:10:22.223 "data_offset": 2048, 00:10:22.223 "data_size": 63488 00:10:22.223 } 00:10:22.223 ] 00:10:22.223 }' 00:10:22.223 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.223 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.788 [2024-12-06 18:07:34.756469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.788 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.788 "name": "Existed_Raid", 00:10:22.788 "aliases": [ 00:10:22.788 "fb0b863b-b8ca-4f98-9ba5-1b648ca19d86" 00:10:22.788 ], 00:10:22.788 "product_name": "Raid Volume", 00:10:22.788 "block_size": 512, 00:10:22.788 "num_blocks": 63488, 00:10:22.788 "uuid": "fb0b863b-b8ca-4f98-9ba5-1b648ca19d86", 00:10:22.788 "assigned_rate_limits": { 00:10:22.788 "rw_ios_per_sec": 0, 00:10:22.789 "rw_mbytes_per_sec": 0, 00:10:22.789 "r_mbytes_per_sec": 0, 00:10:22.789 "w_mbytes_per_sec": 0 00:10:22.789 }, 00:10:22.789 "claimed": false, 00:10:22.789 "zoned": false, 00:10:22.789 "supported_io_types": { 00:10:22.789 "read": true, 00:10:22.789 "write": true, 00:10:22.789 "unmap": false, 00:10:22.789 "flush": false, 00:10:22.789 "reset": true, 00:10:22.789 "nvme_admin": false, 00:10:22.789 "nvme_io": false, 00:10:22.789 "nvme_io_md": false, 00:10:22.789 
"write_zeroes": true, 00:10:22.789 "zcopy": false, 00:10:22.789 "get_zone_info": false, 00:10:22.789 "zone_management": false, 00:10:22.789 "zone_append": false, 00:10:22.789 "compare": false, 00:10:22.789 "compare_and_write": false, 00:10:22.789 "abort": false, 00:10:22.789 "seek_hole": false, 00:10:22.789 "seek_data": false, 00:10:22.789 "copy": false, 00:10:22.789 "nvme_iov_md": false 00:10:22.789 }, 00:10:22.789 "memory_domains": [ 00:10:22.789 { 00:10:22.789 "dma_device_id": "system", 00:10:22.789 "dma_device_type": 1 00:10:22.789 }, 00:10:22.789 { 00:10:22.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.789 "dma_device_type": 2 00:10:22.789 }, 00:10:22.789 { 00:10:22.789 "dma_device_id": "system", 00:10:22.789 "dma_device_type": 1 00:10:22.789 }, 00:10:22.789 { 00:10:22.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.789 "dma_device_type": 2 00:10:22.789 }, 00:10:22.789 { 00:10:22.789 "dma_device_id": "system", 00:10:22.789 "dma_device_type": 1 00:10:22.789 }, 00:10:22.789 { 00:10:22.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.789 "dma_device_type": 2 00:10:22.789 } 00:10:22.789 ], 00:10:22.789 "driver_specific": { 00:10:22.789 "raid": { 00:10:22.789 "uuid": "fb0b863b-b8ca-4f98-9ba5-1b648ca19d86", 00:10:22.789 "strip_size_kb": 0, 00:10:22.789 "state": "online", 00:10:22.789 "raid_level": "raid1", 00:10:22.789 "superblock": true, 00:10:22.789 "num_base_bdevs": 3, 00:10:22.789 "num_base_bdevs_discovered": 3, 00:10:22.789 "num_base_bdevs_operational": 3, 00:10:22.789 "base_bdevs_list": [ 00:10:22.789 { 00:10:22.789 "name": "BaseBdev1", 00:10:22.789 "uuid": "a26a4a0e-32fa-4f88-aa4b-c2fb956a640b", 00:10:22.789 "is_configured": true, 00:10:22.789 "data_offset": 2048, 00:10:22.789 "data_size": 63488 00:10:22.789 }, 00:10:22.789 { 00:10:22.789 "name": "BaseBdev2", 00:10:22.789 "uuid": "2005cf2e-30d7-419f-b810-1a1ec72596bd", 00:10:22.789 "is_configured": true, 00:10:22.789 "data_offset": 2048, 00:10:22.789 "data_size": 63488 00:10:22.789 }, 
00:10:22.789 { 00:10:22.789 "name": "BaseBdev3", 00:10:22.789 "uuid": "bc7de93b-5a8a-4e86-a341-883f0d90b322", 00:10:22.789 "is_configured": true, 00:10:22.789 "data_offset": 2048, 00:10:22.789 "data_size": 63488 00:10:22.789 } 00:10:22.789 ] 00:10:22.789 } 00:10:22.789 } 00:10:22.789 }' 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:22.789 BaseBdev2 00:10:22.789 BaseBdev3' 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.789 
18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.789 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.047 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.047 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.047 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.048 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.048 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.048 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:23.048 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.048 18:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.048 18:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.048 [2024-12-06 18:07:35.027714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.048 
18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.048 "name": "Existed_Raid", 00:10:23.048 "uuid": "fb0b863b-b8ca-4f98-9ba5-1b648ca19d86", 00:10:23.048 "strip_size_kb": 0, 00:10:23.048 "state": "online", 00:10:23.048 "raid_level": "raid1", 00:10:23.048 "superblock": true, 00:10:23.048 "num_base_bdevs": 3, 00:10:23.048 "num_base_bdevs_discovered": 2, 00:10:23.048 "num_base_bdevs_operational": 2, 00:10:23.048 "base_bdevs_list": [ 00:10:23.048 { 00:10:23.048 "name": null, 00:10:23.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.048 "is_configured": false, 00:10:23.048 "data_offset": 0, 00:10:23.048 "data_size": 63488 00:10:23.048 }, 00:10:23.048 { 00:10:23.048 "name": "BaseBdev2", 00:10:23.048 "uuid": "2005cf2e-30d7-419f-b810-1a1ec72596bd", 00:10:23.048 "is_configured": true, 00:10:23.048 "data_offset": 2048, 00:10:23.048 "data_size": 63488 00:10:23.048 }, 00:10:23.048 { 00:10:23.048 "name": "BaseBdev3", 00:10:23.048 "uuid": "bc7de93b-5a8a-4e86-a341-883f0d90b322", 00:10:23.048 "is_configured": true, 00:10:23.048 "data_offset": 2048, 00:10:23.048 "data_size": 63488 00:10:23.048 } 00:10:23.048 ] 00:10:23.048 }' 00:10:23.048 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.048 
18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.614 [2024-12-06 18:07:35.637754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.614 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.873 [2024-12-06 18:07:35.807292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:23.873 [2024-12-06 18:07:35.807491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.873 [2024-12-06 18:07:35.922654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.873 [2024-12-06 18:07:35.922810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.873 [2024-12-06 18:07:35.922865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.873 18:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.873 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:23.873 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:23.873 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:23.873 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:23.873 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:23.873 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:23.873 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.873 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.131 BaseBdev2 00:10:24.131 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.132 [ 00:10:24.132 { 00:10:24.132 "name": "BaseBdev2", 00:10:24.132 "aliases": [ 00:10:24.132 "8e258748-ea40-43ec-829b-9efb0a2d00ad" 00:10:24.132 ], 00:10:24.132 "product_name": "Malloc disk", 00:10:24.132 "block_size": 512, 00:10:24.132 "num_blocks": 65536, 00:10:24.132 "uuid": "8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:24.132 "assigned_rate_limits": { 00:10:24.132 "rw_ios_per_sec": 0, 00:10:24.132 "rw_mbytes_per_sec": 0, 00:10:24.132 "r_mbytes_per_sec": 0, 00:10:24.132 "w_mbytes_per_sec": 0 00:10:24.132 }, 00:10:24.132 "claimed": false, 00:10:24.132 "zoned": false, 00:10:24.132 "supported_io_types": { 00:10:24.132 "read": true, 00:10:24.132 "write": true, 00:10:24.132 "unmap": true, 00:10:24.132 "flush": true, 00:10:24.132 "reset": true, 00:10:24.132 "nvme_admin": false, 00:10:24.132 "nvme_io": false, 00:10:24.132 
"nvme_io_md": false, 00:10:24.132 "write_zeroes": true, 00:10:24.132 "zcopy": true, 00:10:24.132 "get_zone_info": false, 00:10:24.132 "zone_management": false, 00:10:24.132 "zone_append": false, 00:10:24.132 "compare": false, 00:10:24.132 "compare_and_write": false, 00:10:24.132 "abort": true, 00:10:24.132 "seek_hole": false, 00:10:24.132 "seek_data": false, 00:10:24.132 "copy": true, 00:10:24.132 "nvme_iov_md": false 00:10:24.132 }, 00:10:24.132 "memory_domains": [ 00:10:24.132 { 00:10:24.132 "dma_device_id": "system", 00:10:24.132 "dma_device_type": 1 00:10:24.132 }, 00:10:24.132 { 00:10:24.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.132 "dma_device_type": 2 00:10:24.132 } 00:10:24.132 ], 00:10:24.132 "driver_specific": {} 00:10:24.132 } 00:10:24.132 ] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.132 BaseBdev3 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.132 [ 00:10:24.132 { 00:10:24.132 "name": "BaseBdev3", 00:10:24.132 "aliases": [ 00:10:24.132 "1f321f35-4253-4447-b6ed-da3d9c3e28cb" 00:10:24.132 ], 00:10:24.132 "product_name": "Malloc disk", 00:10:24.132 "block_size": 512, 00:10:24.132 "num_blocks": 65536, 00:10:24.132 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:24.132 "assigned_rate_limits": { 00:10:24.132 "rw_ios_per_sec": 0, 00:10:24.132 "rw_mbytes_per_sec": 0, 00:10:24.132 "r_mbytes_per_sec": 0, 00:10:24.132 "w_mbytes_per_sec": 0 00:10:24.132 }, 00:10:24.132 "claimed": false, 00:10:24.132 "zoned": false, 00:10:24.132 "supported_io_types": { 00:10:24.132 "read": true, 00:10:24.132 "write": true, 00:10:24.132 "unmap": true, 00:10:24.132 "flush": true, 00:10:24.132 "reset": true, 00:10:24.132 "nvme_admin": false, 
00:10:24.132 "nvme_io": false, 00:10:24.132 "nvme_io_md": false, 00:10:24.132 "write_zeroes": true, 00:10:24.132 "zcopy": true, 00:10:24.132 "get_zone_info": false, 00:10:24.132 "zone_management": false, 00:10:24.132 "zone_append": false, 00:10:24.132 "compare": false, 00:10:24.132 "compare_and_write": false, 00:10:24.132 "abort": true, 00:10:24.132 "seek_hole": false, 00:10:24.132 "seek_data": false, 00:10:24.132 "copy": true, 00:10:24.132 "nvme_iov_md": false 00:10:24.132 }, 00:10:24.132 "memory_domains": [ 00:10:24.132 { 00:10:24.132 "dma_device_id": "system", 00:10:24.132 "dma_device_type": 1 00:10:24.132 }, 00:10:24.132 { 00:10:24.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.132 "dma_device_type": 2 00:10:24.132 } 00:10:24.132 ], 00:10:24.132 "driver_specific": {} 00:10:24.132 } 00:10:24.132 ] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.132 [2024-12-06 18:07:36.181352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.132 [2024-12-06 18:07:36.181486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.132 [2024-12-06 18:07:36.181521] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.132 [2024-12-06 18:07:36.183713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.132 
18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.132 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.132 "name": "Existed_Raid", 00:10:24.132 "uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:24.132 "strip_size_kb": 0, 00:10:24.132 "state": "configuring", 00:10:24.132 "raid_level": "raid1", 00:10:24.132 "superblock": true, 00:10:24.132 "num_base_bdevs": 3, 00:10:24.132 "num_base_bdevs_discovered": 2, 00:10:24.132 "num_base_bdevs_operational": 3, 00:10:24.132 "base_bdevs_list": [ 00:10:24.132 { 00:10:24.133 "name": "BaseBdev1", 00:10:24.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.133 "is_configured": false, 00:10:24.133 "data_offset": 0, 00:10:24.133 "data_size": 0 00:10:24.133 }, 00:10:24.133 { 00:10:24.133 "name": "BaseBdev2", 00:10:24.133 "uuid": "8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:24.133 "is_configured": true, 00:10:24.133 "data_offset": 2048, 00:10:24.133 "data_size": 63488 00:10:24.133 }, 00:10:24.133 { 00:10:24.133 "name": "BaseBdev3", 00:10:24.133 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:24.133 "is_configured": true, 00:10:24.133 "data_offset": 2048, 00:10:24.133 "data_size": 63488 00:10:24.133 } 00:10:24.133 ] 00:10:24.133 }' 00:10:24.133 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.133 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.701 [2024-12-06 18:07:36.676519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:24.701 18:07:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.701 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.701 "name": 
"Existed_Raid", 00:10:24.701 "uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:24.701 "strip_size_kb": 0, 00:10:24.701 "state": "configuring", 00:10:24.701 "raid_level": "raid1", 00:10:24.701 "superblock": true, 00:10:24.701 "num_base_bdevs": 3, 00:10:24.701 "num_base_bdevs_discovered": 1, 00:10:24.701 "num_base_bdevs_operational": 3, 00:10:24.701 "base_bdevs_list": [ 00:10:24.701 { 00:10:24.701 "name": "BaseBdev1", 00:10:24.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.701 "is_configured": false, 00:10:24.701 "data_offset": 0, 00:10:24.701 "data_size": 0 00:10:24.701 }, 00:10:24.701 { 00:10:24.701 "name": null, 00:10:24.701 "uuid": "8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:24.701 "is_configured": false, 00:10:24.701 "data_offset": 0, 00:10:24.701 "data_size": 63488 00:10:24.701 }, 00:10:24.701 { 00:10:24.701 "name": "BaseBdev3", 00:10:24.701 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:24.701 "is_configured": true, 00:10:24.701 "data_offset": 2048, 00:10:24.701 "data_size": 63488 00:10:24.701 } 00:10:24.701 ] 00:10:24.701 }' 00:10:24.702 18:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.702 18:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:25.269 
18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.269 [2024-12-06 18:07:37.230235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.269 BaseBdev1 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:25.269 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.269 [ 00:10:25.269 { 00:10:25.269 "name": "BaseBdev1", 00:10:25.269 "aliases": [ 00:10:25.269 "bf628c0a-8860-474b-af55-a16f370701ad" 00:10:25.269 ], 00:10:25.269 "product_name": "Malloc disk", 00:10:25.269 "block_size": 512, 00:10:25.269 "num_blocks": 65536, 00:10:25.269 "uuid": "bf628c0a-8860-474b-af55-a16f370701ad", 00:10:25.269 "assigned_rate_limits": { 00:10:25.269 "rw_ios_per_sec": 0, 00:10:25.269 "rw_mbytes_per_sec": 0, 00:10:25.269 "r_mbytes_per_sec": 0, 00:10:25.269 "w_mbytes_per_sec": 0 00:10:25.269 }, 00:10:25.269 "claimed": true, 00:10:25.269 "claim_type": "exclusive_write", 00:10:25.269 "zoned": false, 00:10:25.269 "supported_io_types": { 00:10:25.269 "read": true, 00:10:25.269 "write": true, 00:10:25.269 "unmap": true, 00:10:25.269 "flush": true, 00:10:25.270 "reset": true, 00:10:25.270 "nvme_admin": false, 00:10:25.270 "nvme_io": false, 00:10:25.270 "nvme_io_md": false, 00:10:25.270 "write_zeroes": true, 00:10:25.270 "zcopy": true, 00:10:25.270 "get_zone_info": false, 00:10:25.270 "zone_management": false, 00:10:25.270 "zone_append": false, 00:10:25.270 "compare": false, 00:10:25.270 "compare_and_write": false, 00:10:25.270 "abort": true, 00:10:25.270 "seek_hole": false, 00:10:25.270 "seek_data": false, 00:10:25.270 "copy": true, 00:10:25.270 "nvme_iov_md": false 00:10:25.270 }, 00:10:25.270 "memory_domains": [ 00:10:25.270 { 00:10:25.270 "dma_device_id": "system", 00:10:25.270 "dma_device_type": 1 00:10:25.270 }, 00:10:25.270 { 00:10:25.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.270 "dma_device_type": 2 00:10:25.270 } 00:10:25.270 ], 00:10:25.270 "driver_specific": {} 00:10:25.270 } 00:10:25.270 ] 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:25.270 
18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.270 "name": "Existed_Raid", 00:10:25.270 "uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:25.270 "strip_size_kb": 0, 
00:10:25.270 "state": "configuring", 00:10:25.270 "raid_level": "raid1", 00:10:25.270 "superblock": true, 00:10:25.270 "num_base_bdevs": 3, 00:10:25.270 "num_base_bdevs_discovered": 2, 00:10:25.270 "num_base_bdevs_operational": 3, 00:10:25.270 "base_bdevs_list": [ 00:10:25.270 { 00:10:25.270 "name": "BaseBdev1", 00:10:25.270 "uuid": "bf628c0a-8860-474b-af55-a16f370701ad", 00:10:25.270 "is_configured": true, 00:10:25.270 "data_offset": 2048, 00:10:25.270 "data_size": 63488 00:10:25.270 }, 00:10:25.270 { 00:10:25.270 "name": null, 00:10:25.270 "uuid": "8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:25.270 "is_configured": false, 00:10:25.270 "data_offset": 0, 00:10:25.270 "data_size": 63488 00:10:25.270 }, 00:10:25.270 { 00:10:25.270 "name": "BaseBdev3", 00:10:25.270 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:25.270 "is_configured": true, 00:10:25.270 "data_offset": 2048, 00:10:25.270 "data_size": 63488 00:10:25.270 } 00:10:25.270 ] 00:10:25.270 }' 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.270 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.839 [2024-12-06 18:07:37.757427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.839 18:07:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.839 "name": "Existed_Raid", 00:10:25.839 "uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:25.839 "strip_size_kb": 0, 00:10:25.839 "state": "configuring", 00:10:25.839 "raid_level": "raid1", 00:10:25.839 "superblock": true, 00:10:25.839 "num_base_bdevs": 3, 00:10:25.839 "num_base_bdevs_discovered": 1, 00:10:25.839 "num_base_bdevs_operational": 3, 00:10:25.839 "base_bdevs_list": [ 00:10:25.839 { 00:10:25.839 "name": "BaseBdev1", 00:10:25.839 "uuid": "bf628c0a-8860-474b-af55-a16f370701ad", 00:10:25.839 "is_configured": true, 00:10:25.839 "data_offset": 2048, 00:10:25.839 "data_size": 63488 00:10:25.839 }, 00:10:25.839 { 00:10:25.839 "name": null, 00:10:25.839 "uuid": "8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:25.839 "is_configured": false, 00:10:25.839 "data_offset": 0, 00:10:25.839 "data_size": 63488 00:10:25.839 }, 00:10:25.839 { 00:10:25.839 "name": null, 00:10:25.839 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:25.839 "is_configured": false, 00:10:25.839 "data_offset": 0, 00:10:25.839 "data_size": 63488 00:10:25.839 } 00:10:25.839 ] 00:10:25.839 }' 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.839 18:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.096 [2024-12-06 18:07:38.252677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.096 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.354 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.354 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.354 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.354 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.354 "name": "Existed_Raid", 00:10:26.354 "uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:26.354 "strip_size_kb": 0, 00:10:26.354 "state": "configuring", 00:10:26.354 "raid_level": "raid1", 00:10:26.354 "superblock": true, 00:10:26.354 "num_base_bdevs": 3, 00:10:26.354 "num_base_bdevs_discovered": 2, 00:10:26.354 "num_base_bdevs_operational": 3, 00:10:26.354 "base_bdevs_list": [ 00:10:26.354 { 00:10:26.354 "name": "BaseBdev1", 00:10:26.354 "uuid": "bf628c0a-8860-474b-af55-a16f370701ad", 00:10:26.354 "is_configured": true, 00:10:26.354 "data_offset": 2048, 00:10:26.354 "data_size": 63488 00:10:26.354 }, 00:10:26.354 { 00:10:26.354 "name": null, 00:10:26.354 "uuid": "8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:26.354 "is_configured": false, 00:10:26.354 "data_offset": 0, 00:10:26.354 "data_size": 63488 00:10:26.354 }, 00:10:26.354 { 00:10:26.354 "name": "BaseBdev3", 00:10:26.354 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:26.354 "is_configured": true, 00:10:26.354 "data_offset": 2048, 00:10:26.354 "data_size": 63488 00:10:26.354 } 00:10:26.354 ] 00:10:26.354 }' 00:10:26.354 18:07:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.354 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.613 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:26.613 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.613 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.613 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.613 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.613 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:26.613 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:26.613 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.613 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.613 [2024-12-06 18:07:38.727899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.872 "name": "Existed_Raid", 00:10:26.872 "uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:26.872 "strip_size_kb": 0, 00:10:26.872 "state": "configuring", 00:10:26.872 "raid_level": "raid1", 00:10:26.872 "superblock": true, 00:10:26.872 "num_base_bdevs": 3, 00:10:26.872 "num_base_bdevs_discovered": 1, 00:10:26.872 "num_base_bdevs_operational": 3, 00:10:26.872 "base_bdevs_list": [ 00:10:26.872 { 00:10:26.872 "name": null, 00:10:26.872 "uuid": "bf628c0a-8860-474b-af55-a16f370701ad", 00:10:26.872 "is_configured": false, 00:10:26.872 "data_offset": 0, 00:10:26.872 "data_size": 63488 00:10:26.872 }, 00:10:26.872 { 00:10:26.872 "name": null, 00:10:26.872 "uuid": 
"8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:26.872 "is_configured": false, 00:10:26.872 "data_offset": 0, 00:10:26.872 "data_size": 63488 00:10:26.872 }, 00:10:26.872 { 00:10:26.872 "name": "BaseBdev3", 00:10:26.872 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:26.872 "is_configured": true, 00:10:26.872 "data_offset": 2048, 00:10:26.872 "data_size": 63488 00:10:26.872 } 00:10:26.872 ] 00:10:26.872 }' 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.872 18:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.439 [2024-12-06 18:07:39.346810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.439 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.440 "name": "Existed_Raid", 00:10:27.440 "uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:27.440 "strip_size_kb": 0, 00:10:27.440 "state": "configuring", 00:10:27.440 
"raid_level": "raid1", 00:10:27.440 "superblock": true, 00:10:27.440 "num_base_bdevs": 3, 00:10:27.440 "num_base_bdevs_discovered": 2, 00:10:27.440 "num_base_bdevs_operational": 3, 00:10:27.440 "base_bdevs_list": [ 00:10:27.440 { 00:10:27.440 "name": null, 00:10:27.440 "uuid": "bf628c0a-8860-474b-af55-a16f370701ad", 00:10:27.440 "is_configured": false, 00:10:27.440 "data_offset": 0, 00:10:27.440 "data_size": 63488 00:10:27.440 }, 00:10:27.440 { 00:10:27.440 "name": "BaseBdev2", 00:10:27.440 "uuid": "8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:27.440 "is_configured": true, 00:10:27.440 "data_offset": 2048, 00:10:27.440 "data_size": 63488 00:10:27.440 }, 00:10:27.440 { 00:10:27.440 "name": "BaseBdev3", 00:10:27.440 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:27.440 "is_configured": true, 00:10:27.440 "data_offset": 2048, 00:10:27.440 "data_size": 63488 00:10:27.440 } 00:10:27.440 ] 00:10:27.440 }' 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.440 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.699 18:07:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.699 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bf628c0a-8860-474b-af55-a16f370701ad 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.957 [2024-12-06 18:07:39.917611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:27.957 NewBaseBdev 00:10:27.957 [2024-12-06 18:07:39.917998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:27.957 [2024-12-06 18:07:39.918019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.957 [2024-12-06 18:07:39.918332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:27.957 [2024-12-06 18:07:39.918501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:27.957 [2024-12-06 18:07:39.918514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:27.957 [2024-12-06 18:07:39.918679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:27.957 
18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.957 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.958 [ 00:10:27.958 { 00:10:27.958 "name": "NewBaseBdev", 00:10:27.958 "aliases": [ 00:10:27.958 "bf628c0a-8860-474b-af55-a16f370701ad" 00:10:27.958 ], 00:10:27.958 "product_name": "Malloc disk", 00:10:27.958 "block_size": 512, 00:10:27.958 "num_blocks": 65536, 00:10:27.958 "uuid": "bf628c0a-8860-474b-af55-a16f370701ad", 00:10:27.958 "assigned_rate_limits": { 00:10:27.958 "rw_ios_per_sec": 0, 00:10:27.958 "rw_mbytes_per_sec": 0, 00:10:27.958 "r_mbytes_per_sec": 0, 00:10:27.958 "w_mbytes_per_sec": 0 00:10:27.958 }, 00:10:27.958 "claimed": true, 00:10:27.958 "claim_type": "exclusive_write", 00:10:27.958 
"zoned": false, 00:10:27.958 "supported_io_types": { 00:10:27.958 "read": true, 00:10:27.958 "write": true, 00:10:27.958 "unmap": true, 00:10:27.958 "flush": true, 00:10:27.958 "reset": true, 00:10:27.958 "nvme_admin": false, 00:10:27.958 "nvme_io": false, 00:10:27.958 "nvme_io_md": false, 00:10:27.958 "write_zeroes": true, 00:10:27.958 "zcopy": true, 00:10:27.958 "get_zone_info": false, 00:10:27.958 "zone_management": false, 00:10:27.958 "zone_append": false, 00:10:27.958 "compare": false, 00:10:27.958 "compare_and_write": false, 00:10:27.958 "abort": true, 00:10:27.958 "seek_hole": false, 00:10:27.958 "seek_data": false, 00:10:27.958 "copy": true, 00:10:27.958 "nvme_iov_md": false 00:10:27.958 }, 00:10:27.958 "memory_domains": [ 00:10:27.958 { 00:10:27.958 "dma_device_id": "system", 00:10:27.958 "dma_device_type": 1 00:10:27.958 }, 00:10:27.958 { 00:10:27.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.958 "dma_device_type": 2 00:10:27.958 } 00:10:27.958 ], 00:10:27.958 "driver_specific": {} 00:10:27.958 } 00:10:27.958 ] 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.958 18:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.958 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.958 "name": "Existed_Raid", 00:10:27.958 "uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:27.958 "strip_size_kb": 0, 00:10:27.958 "state": "online", 00:10:27.958 "raid_level": "raid1", 00:10:27.958 "superblock": true, 00:10:27.958 "num_base_bdevs": 3, 00:10:27.958 "num_base_bdevs_discovered": 3, 00:10:27.958 "num_base_bdevs_operational": 3, 00:10:27.958 "base_bdevs_list": [ 00:10:27.958 { 00:10:27.958 "name": "NewBaseBdev", 00:10:27.958 "uuid": "bf628c0a-8860-474b-af55-a16f370701ad", 00:10:27.958 "is_configured": true, 00:10:27.958 "data_offset": 2048, 00:10:27.958 "data_size": 63488 00:10:27.958 }, 00:10:27.958 { 00:10:27.958 "name": "BaseBdev2", 00:10:27.958 "uuid": "8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:27.958 "is_configured": true, 00:10:27.958 "data_offset": 2048, 00:10:27.958 "data_size": 63488 00:10:27.958 }, 00:10:27.958 
{ 00:10:27.958 "name": "BaseBdev3", 00:10:27.958 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:27.958 "is_configured": true, 00:10:27.958 "data_offset": 2048, 00:10:27.958 "data_size": 63488 00:10:27.958 } 00:10:27.958 ] 00:10:27.958 }' 00:10:27.958 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.958 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.524 [2024-12-06 18:07:40.465161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.524 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.524 "name": "Existed_Raid", 00:10:28.524 
"aliases": [ 00:10:28.524 "ce119936-ac66-45d1-8153-0503388c67bc" 00:10:28.524 ], 00:10:28.524 "product_name": "Raid Volume", 00:10:28.524 "block_size": 512, 00:10:28.524 "num_blocks": 63488, 00:10:28.524 "uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:28.524 "assigned_rate_limits": { 00:10:28.524 "rw_ios_per_sec": 0, 00:10:28.524 "rw_mbytes_per_sec": 0, 00:10:28.524 "r_mbytes_per_sec": 0, 00:10:28.524 "w_mbytes_per_sec": 0 00:10:28.524 }, 00:10:28.524 "claimed": false, 00:10:28.524 "zoned": false, 00:10:28.524 "supported_io_types": { 00:10:28.524 "read": true, 00:10:28.524 "write": true, 00:10:28.524 "unmap": false, 00:10:28.524 "flush": false, 00:10:28.524 "reset": true, 00:10:28.524 "nvme_admin": false, 00:10:28.524 "nvme_io": false, 00:10:28.524 "nvme_io_md": false, 00:10:28.524 "write_zeroes": true, 00:10:28.524 "zcopy": false, 00:10:28.524 "get_zone_info": false, 00:10:28.524 "zone_management": false, 00:10:28.524 "zone_append": false, 00:10:28.524 "compare": false, 00:10:28.524 "compare_and_write": false, 00:10:28.524 "abort": false, 00:10:28.524 "seek_hole": false, 00:10:28.524 "seek_data": false, 00:10:28.524 "copy": false, 00:10:28.524 "nvme_iov_md": false 00:10:28.524 }, 00:10:28.524 "memory_domains": [ 00:10:28.524 { 00:10:28.524 "dma_device_id": "system", 00:10:28.524 "dma_device_type": 1 00:10:28.524 }, 00:10:28.524 { 00:10:28.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.525 "dma_device_type": 2 00:10:28.525 }, 00:10:28.525 { 00:10:28.525 "dma_device_id": "system", 00:10:28.525 "dma_device_type": 1 00:10:28.525 }, 00:10:28.525 { 00:10:28.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.525 "dma_device_type": 2 00:10:28.525 }, 00:10:28.525 { 00:10:28.525 "dma_device_id": "system", 00:10:28.525 "dma_device_type": 1 00:10:28.525 }, 00:10:28.525 { 00:10:28.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.525 "dma_device_type": 2 00:10:28.525 } 00:10:28.525 ], 00:10:28.525 "driver_specific": { 00:10:28.525 "raid": { 00:10:28.525 
"uuid": "ce119936-ac66-45d1-8153-0503388c67bc", 00:10:28.525 "strip_size_kb": 0, 00:10:28.525 "state": "online", 00:10:28.525 "raid_level": "raid1", 00:10:28.525 "superblock": true, 00:10:28.525 "num_base_bdevs": 3, 00:10:28.525 "num_base_bdevs_discovered": 3, 00:10:28.525 "num_base_bdevs_operational": 3, 00:10:28.525 "base_bdevs_list": [ 00:10:28.525 { 00:10:28.525 "name": "NewBaseBdev", 00:10:28.525 "uuid": "bf628c0a-8860-474b-af55-a16f370701ad", 00:10:28.525 "is_configured": true, 00:10:28.525 "data_offset": 2048, 00:10:28.525 "data_size": 63488 00:10:28.525 }, 00:10:28.525 { 00:10:28.525 "name": "BaseBdev2", 00:10:28.525 "uuid": "8e258748-ea40-43ec-829b-9efb0a2d00ad", 00:10:28.525 "is_configured": true, 00:10:28.525 "data_offset": 2048, 00:10:28.525 "data_size": 63488 00:10:28.525 }, 00:10:28.525 { 00:10:28.525 "name": "BaseBdev3", 00:10:28.525 "uuid": "1f321f35-4253-4447-b6ed-da3d9c3e28cb", 00:10:28.525 "is_configured": true, 00:10:28.525 "data_offset": 2048, 00:10:28.525 "data_size": 63488 00:10:28.525 } 00:10:28.525 ] 00:10:28.525 } 00:10:28.525 } 00:10:28.525 }' 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:28.525 BaseBdev2 00:10:28.525 BaseBdev3' 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.525 
18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.525 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.782 [2024-12-06 18:07:40.740347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.782 [2024-12-06 18:07:40.740461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.782 [2024-12-06 18:07:40.740594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.782 [2024-12-06 18:07:40.740977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.782 [2024-12-06 18:07:40.741041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68459 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68459 ']' 00:10:28.782 18:07:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68459 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68459 00:10:28.782 killing process with pid 68459 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68459' 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68459 00:10:28.782 [2024-12-06 18:07:40.788042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.782 18:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68459 00:10:29.040 [2024-12-06 18:07:41.147489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.415 ************************************ 00:10:30.415 END TEST raid_state_function_test_sb 00:10:30.415 ************************************ 00:10:30.415 18:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:30.415 00:10:30.415 real 0m11.379s 00:10:30.415 user 0m18.017s 00:10:30.415 sys 0m1.797s 00:10:30.415 18:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.415 18:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.415 18:07:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:30.415 18:07:42 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:30.415 18:07:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.415 18:07:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.415 ************************************ 00:10:30.415 START TEST raid_superblock_test 00:10:30.416 ************************************ 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:30.416 18:07:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69090 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69090 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 69090 ']' 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.416 18:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.674 [2024-12-06 18:07:42.664946] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:10:30.674 [2024-12-06 18:07:42.665159] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69090 ] 00:10:30.931 [2024-12-06 18:07:42.842757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.931 [2024-12-06 18:07:42.974463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.189 [2024-12-06 18:07:43.210989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.189 [2024-12-06 18:07:43.211086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:31.447 
18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.447 malloc1 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.447 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.447 [2024-12-06 18:07:43.587674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:31.447 [2024-12-06 18:07:43.587810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.448 [2024-12-06 18:07:43.587861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:31.448 [2024-12-06 18:07:43.587906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.448 [2024-12-06 18:07:43.590515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.448 [2024-12-06 18:07:43.590611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:31.448 pt1 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.448 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.706 malloc2 00:10:31.706 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.706 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:31.706 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.706 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.707 [2024-12-06 18:07:43.645244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:31.707 [2024-12-06 18:07:43.645362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.707 [2024-12-06 18:07:43.645398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:31.707 [2024-12-06 18:07:43.645410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.707 [2024-12-06 18:07:43.647975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.707 [2024-12-06 18:07:43.648026] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:31.707 
pt2 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.707 malloc3 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.707 [2024-12-06 18:07:43.713268] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:31.707 [2024-12-06 18:07:43.713383] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.707 [2024-12-06 18:07:43.713433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:31.707 [2024-12-06 18:07:43.713478] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.707 [2024-12-06 18:07:43.716126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.707 [2024-12-06 18:07:43.716215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:31.707 pt3 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.707 [2024-12-06 18:07:43.721312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:31.707 [2024-12-06 18:07:43.723561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:31.707 [2024-12-06 18:07:43.723719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:31.707 [2024-12-06 18:07:43.723963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:31.707 [2024-12-06 18:07:43.723991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:31.707 [2024-12-06 18:07:43.724357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:31.707 
[2024-12-06 18:07:43.724637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:31.707 [2024-12-06 18:07:43.724658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:31.707 [2024-12-06 18:07:43.724869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.707 "name": "raid_bdev1", 00:10:31.707 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:31.707 "strip_size_kb": 0, 00:10:31.707 "state": "online", 00:10:31.707 "raid_level": "raid1", 00:10:31.707 "superblock": true, 00:10:31.707 "num_base_bdevs": 3, 00:10:31.707 "num_base_bdevs_discovered": 3, 00:10:31.707 "num_base_bdevs_operational": 3, 00:10:31.707 "base_bdevs_list": [ 00:10:31.707 { 00:10:31.707 "name": "pt1", 00:10:31.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.707 "is_configured": true, 00:10:31.707 "data_offset": 2048, 00:10:31.707 "data_size": 63488 00:10:31.707 }, 00:10:31.707 { 00:10:31.707 "name": "pt2", 00:10:31.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.707 "is_configured": true, 00:10:31.707 "data_offset": 2048, 00:10:31.707 "data_size": 63488 00:10:31.707 }, 00:10:31.707 { 00:10:31.707 "name": "pt3", 00:10:31.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.707 "is_configured": true, 00:10:31.707 "data_offset": 2048, 00:10:31.707 "data_size": 63488 00:10:31.707 } 00:10:31.707 ] 00:10:31.707 }' 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.707 18:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.275 18:07:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.275 [2024-12-06 18:07:44.208839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.275 "name": "raid_bdev1", 00:10:32.275 "aliases": [ 00:10:32.275 "4fa297f4-343b-452b-8b25-a6e306dc0d8b" 00:10:32.275 ], 00:10:32.275 "product_name": "Raid Volume", 00:10:32.275 "block_size": 512, 00:10:32.275 "num_blocks": 63488, 00:10:32.275 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:32.275 "assigned_rate_limits": { 00:10:32.275 "rw_ios_per_sec": 0, 00:10:32.275 "rw_mbytes_per_sec": 0, 00:10:32.275 "r_mbytes_per_sec": 0, 00:10:32.275 "w_mbytes_per_sec": 0 00:10:32.275 }, 00:10:32.275 "claimed": false, 00:10:32.275 "zoned": false, 00:10:32.275 "supported_io_types": { 00:10:32.275 "read": true, 00:10:32.275 "write": true, 00:10:32.275 "unmap": false, 00:10:32.275 "flush": false, 00:10:32.275 "reset": true, 00:10:32.275 "nvme_admin": false, 00:10:32.275 "nvme_io": false, 00:10:32.275 "nvme_io_md": false, 00:10:32.275 "write_zeroes": true, 00:10:32.275 "zcopy": false, 00:10:32.275 "get_zone_info": false, 00:10:32.275 "zone_management": false, 00:10:32.275 "zone_append": false, 00:10:32.275 "compare": false, 00:10:32.275 
"compare_and_write": false, 00:10:32.275 "abort": false, 00:10:32.275 "seek_hole": false, 00:10:32.275 "seek_data": false, 00:10:32.275 "copy": false, 00:10:32.275 "nvme_iov_md": false 00:10:32.275 }, 00:10:32.275 "memory_domains": [ 00:10:32.275 { 00:10:32.275 "dma_device_id": "system", 00:10:32.275 "dma_device_type": 1 00:10:32.275 }, 00:10:32.275 { 00:10:32.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.275 "dma_device_type": 2 00:10:32.275 }, 00:10:32.275 { 00:10:32.275 "dma_device_id": "system", 00:10:32.275 "dma_device_type": 1 00:10:32.275 }, 00:10:32.275 { 00:10:32.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.275 "dma_device_type": 2 00:10:32.275 }, 00:10:32.275 { 00:10:32.275 "dma_device_id": "system", 00:10:32.275 "dma_device_type": 1 00:10:32.275 }, 00:10:32.275 { 00:10:32.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.275 "dma_device_type": 2 00:10:32.275 } 00:10:32.275 ], 00:10:32.275 "driver_specific": { 00:10:32.275 "raid": { 00:10:32.275 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:32.275 "strip_size_kb": 0, 00:10:32.275 "state": "online", 00:10:32.275 "raid_level": "raid1", 00:10:32.275 "superblock": true, 00:10:32.275 "num_base_bdevs": 3, 00:10:32.275 "num_base_bdevs_discovered": 3, 00:10:32.275 "num_base_bdevs_operational": 3, 00:10:32.275 "base_bdevs_list": [ 00:10:32.275 { 00:10:32.275 "name": "pt1", 00:10:32.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:32.275 "is_configured": true, 00:10:32.275 "data_offset": 2048, 00:10:32.275 "data_size": 63488 00:10:32.275 }, 00:10:32.275 { 00:10:32.275 "name": "pt2", 00:10:32.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.275 "is_configured": true, 00:10:32.275 "data_offset": 2048, 00:10:32.275 "data_size": 63488 00:10:32.275 }, 00:10:32.275 { 00:10:32.275 "name": "pt3", 00:10:32.275 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.275 "is_configured": true, 00:10:32.275 "data_offset": 2048, 00:10:32.275 "data_size": 63488 00:10:32.275 } 
00:10:32.275 ] 00:10:32.275 } 00:10:32.275 } 00:10:32.275 }' 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:32.275 pt2 00:10:32.275 pt3' 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:32.275 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.276 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.276 18:07:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.276 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.276 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.276 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.276 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.276 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.276 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:32.276 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.276 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 [2024-12-06 18:07:44.484429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4fa297f4-343b-452b-8b25-a6e306dc0d8b 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4fa297f4-343b-452b-8b25-a6e306dc0d8b ']' 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 [2024-12-06 18:07:44.512007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.535 [2024-12-06 18:07:44.512144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.535 [2024-12-06 18:07:44.512281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.535 [2024-12-06 18:07:44.512407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.535 [2024-12-06 18:07:44.512462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:32.535 
18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.535 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.535 [2024-12-06 18:07:44.647861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:32.535 [2024-12-06 18:07:44.650048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:32.535 [2024-12-06 18:07:44.650207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:32.535 [2024-12-06 18:07:44.650302] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:32.535 [2024-12-06 18:07:44.650412] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:32.535 [2024-12-06 18:07:44.650440] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:32.535 [2024-12-06 18:07:44.650459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.535 [2024-12-06 18:07:44.650471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:32.535 request: 00:10:32.535 { 00:10:32.535 "name": "raid_bdev1", 00:10:32.535 "raid_level": "raid1", 00:10:32.535 "base_bdevs": [ 00:10:32.535 "malloc1", 00:10:32.535 "malloc2", 00:10:32.535 "malloc3" 00:10:32.535 ], 00:10:32.536 "superblock": false, 00:10:32.536 "method": "bdev_raid_create", 00:10:32.536 "req_id": 1 00:10:32.536 } 00:10:32.536 Got JSON-RPC error response 00:10:32.536 response: 00:10:32.536 { 00:10:32.536 "code": -17, 00:10:32.536 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:32.536 } 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.536 18:07:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.536 [2024-12-06 18:07:44.691718] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:32.536 [2024-12-06 18:07:44.691858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.536 [2024-12-06 18:07:44.691904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:32.536 [2024-12-06 18:07:44.691950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.536 [2024-12-06 18:07:44.694608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.536 [2024-12-06 18:07:44.694740] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:32.536 [2024-12-06 18:07:44.694888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:32.536 [2024-12-06 18:07:44.695000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:32.536 pt1 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.536 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.795 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.795 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.795 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.795 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.795 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.795 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.795 "name": "raid_bdev1", 00:10:32.795 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:32.795 "strip_size_kb": 0, 00:10:32.795 "state": "configuring", 00:10:32.795 
"raid_level": "raid1", 00:10:32.795 "superblock": true, 00:10:32.795 "num_base_bdevs": 3, 00:10:32.795 "num_base_bdevs_discovered": 1, 00:10:32.795 "num_base_bdevs_operational": 3, 00:10:32.795 "base_bdevs_list": [ 00:10:32.795 { 00:10:32.795 "name": "pt1", 00:10:32.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:32.795 "is_configured": true, 00:10:32.795 "data_offset": 2048, 00:10:32.795 "data_size": 63488 00:10:32.795 }, 00:10:32.795 { 00:10:32.795 "name": null, 00:10:32.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.795 "is_configured": false, 00:10:32.795 "data_offset": 2048, 00:10:32.795 "data_size": 63488 00:10:32.795 }, 00:10:32.795 { 00:10:32.795 "name": null, 00:10:32.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.795 "is_configured": false, 00:10:32.795 "data_offset": 2048, 00:10:32.795 "data_size": 63488 00:10:32.795 } 00:10:32.795 ] 00:10:32.795 }' 00:10:32.795 18:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.795 18:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.064 [2024-12-06 18:07:45.127145] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.064 [2024-12-06 18:07:45.127322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.064 [2024-12-06 18:07:45.127375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:33.064 [2024-12-06 18:07:45.127411] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.064 [2024-12-06 18:07:45.127980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.064 [2024-12-06 18:07:45.128048] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.064 [2024-12-06 18:07:45.128214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:33.064 [2024-12-06 18:07:45.128280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.064 pt2 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.064 [2024-12-06 18:07:45.135125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.064 "name": "raid_bdev1", 00:10:33.064 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:33.064 "strip_size_kb": 0, 00:10:33.064 "state": "configuring", 00:10:33.064 "raid_level": "raid1", 00:10:33.064 "superblock": true, 00:10:33.064 "num_base_bdevs": 3, 00:10:33.064 "num_base_bdevs_discovered": 1, 00:10:33.064 "num_base_bdevs_operational": 3, 00:10:33.064 "base_bdevs_list": [ 00:10:33.064 { 00:10:33.064 "name": "pt1", 00:10:33.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.064 "is_configured": true, 00:10:33.064 "data_offset": 2048, 00:10:33.064 "data_size": 63488 00:10:33.064 }, 00:10:33.064 { 00:10:33.064 "name": null, 00:10:33.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.064 "is_configured": false, 00:10:33.064 "data_offset": 0, 00:10:33.064 "data_size": 63488 00:10:33.064 }, 00:10:33.064 { 00:10:33.064 "name": null, 00:10:33.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.064 "is_configured": false, 00:10:33.064 "data_offset": 2048, 00:10:33.064 
"data_size": 63488 00:10:33.064 } 00:10:33.064 ] 00:10:33.064 }' 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.064 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.639 [2024-12-06 18:07:45.622270] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.639 [2024-12-06 18:07:45.622444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.639 [2024-12-06 18:07:45.622493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:33.639 [2024-12-06 18:07:45.622554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.639 [2024-12-06 18:07:45.623154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.639 [2024-12-06 18:07:45.623241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.639 [2024-12-06 18:07:45.623351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:33.639 [2024-12-06 18:07:45.623396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.639 pt2 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.639 [2024-12-06 18:07:45.630264] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:33.639 [2024-12-06 18:07:45.630382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.639 [2024-12-06 18:07:45.630425] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:33.639 [2024-12-06 18:07:45.630483] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.639 [2024-12-06 18:07:45.631030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.639 [2024-12-06 18:07:45.631137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:33.639 [2024-12-06 18:07:45.631282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:33.639 [2024-12-06 18:07:45.631349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:33.639 [2024-12-06 18:07:45.631542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:33.639 [2024-12-06 18:07:45.631592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:33.639 [2024-12-06 18:07:45.631897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:33.639 [2024-12-06 18:07:45.632143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:33.639 [2024-12-06 18:07:45.632195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:33.639 [2024-12-06 18:07:45.632410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.639 pt3 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.639 "name": "raid_bdev1", 00:10:33.639 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:33.639 "strip_size_kb": 0, 00:10:33.639 "state": "online", 00:10:33.639 "raid_level": "raid1", 00:10:33.639 "superblock": true, 00:10:33.639 "num_base_bdevs": 3, 00:10:33.639 "num_base_bdevs_discovered": 3, 00:10:33.639 "num_base_bdevs_operational": 3, 00:10:33.639 "base_bdevs_list": [ 00:10:33.639 { 00:10:33.639 "name": "pt1", 00:10:33.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.639 "is_configured": true, 00:10:33.639 "data_offset": 2048, 00:10:33.639 "data_size": 63488 00:10:33.639 }, 00:10:33.639 { 00:10:33.639 "name": "pt2", 00:10:33.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.639 "is_configured": true, 00:10:33.639 "data_offset": 2048, 00:10:33.639 "data_size": 63488 00:10:33.639 }, 00:10:33.639 { 00:10:33.639 "name": "pt3", 00:10:33.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.639 "is_configured": true, 00:10:33.639 "data_offset": 2048, 00:10:33.639 "data_size": 63488 00:10:33.639 } 00:10:33.639 ] 00:10:33.639 }' 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.639 18:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.207 [2024-12-06 18:07:46.093869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.207 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.207 "name": "raid_bdev1", 00:10:34.207 "aliases": [ 00:10:34.207 "4fa297f4-343b-452b-8b25-a6e306dc0d8b" 00:10:34.207 ], 00:10:34.207 "product_name": "Raid Volume", 00:10:34.207 "block_size": 512, 00:10:34.207 "num_blocks": 63488, 00:10:34.207 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:34.207 "assigned_rate_limits": { 00:10:34.207 "rw_ios_per_sec": 0, 00:10:34.207 "rw_mbytes_per_sec": 0, 00:10:34.207 "r_mbytes_per_sec": 0, 00:10:34.207 "w_mbytes_per_sec": 0 00:10:34.207 }, 00:10:34.207 "claimed": false, 00:10:34.207 "zoned": false, 00:10:34.207 "supported_io_types": { 00:10:34.207 "read": true, 00:10:34.207 "write": true, 00:10:34.207 "unmap": false, 00:10:34.207 "flush": false, 00:10:34.207 "reset": true, 00:10:34.207 "nvme_admin": false, 00:10:34.207 "nvme_io": false, 00:10:34.207 "nvme_io_md": false, 00:10:34.207 "write_zeroes": true, 00:10:34.207 "zcopy": false, 00:10:34.207 "get_zone_info": false, 
00:10:34.207 "zone_management": false, 00:10:34.207 "zone_append": false, 00:10:34.207 "compare": false, 00:10:34.207 "compare_and_write": false, 00:10:34.207 "abort": false, 00:10:34.207 "seek_hole": false, 00:10:34.207 "seek_data": false, 00:10:34.207 "copy": false, 00:10:34.207 "nvme_iov_md": false 00:10:34.207 }, 00:10:34.207 "memory_domains": [ 00:10:34.207 { 00:10:34.207 "dma_device_id": "system", 00:10:34.207 "dma_device_type": 1 00:10:34.207 }, 00:10:34.207 { 00:10:34.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.207 "dma_device_type": 2 00:10:34.207 }, 00:10:34.207 { 00:10:34.207 "dma_device_id": "system", 00:10:34.207 "dma_device_type": 1 00:10:34.207 }, 00:10:34.207 { 00:10:34.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.207 "dma_device_type": 2 00:10:34.207 }, 00:10:34.207 { 00:10:34.207 "dma_device_id": "system", 00:10:34.207 "dma_device_type": 1 00:10:34.207 }, 00:10:34.207 { 00:10:34.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.207 "dma_device_type": 2 00:10:34.207 } 00:10:34.207 ], 00:10:34.207 "driver_specific": { 00:10:34.207 "raid": { 00:10:34.207 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:34.207 "strip_size_kb": 0, 00:10:34.207 "state": "online", 00:10:34.207 "raid_level": "raid1", 00:10:34.207 "superblock": true, 00:10:34.207 "num_base_bdevs": 3, 00:10:34.207 "num_base_bdevs_discovered": 3, 00:10:34.207 "num_base_bdevs_operational": 3, 00:10:34.207 "base_bdevs_list": [ 00:10:34.207 { 00:10:34.207 "name": "pt1", 00:10:34.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.207 "is_configured": true, 00:10:34.207 "data_offset": 2048, 00:10:34.207 "data_size": 63488 00:10:34.208 }, 00:10:34.208 { 00:10:34.208 "name": "pt2", 00:10:34.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.208 "is_configured": true, 00:10:34.208 "data_offset": 2048, 00:10:34.208 "data_size": 63488 00:10:34.208 }, 00:10:34.208 { 00:10:34.208 "name": "pt3", 00:10:34.208 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:34.208 "is_configured": true, 00:10:34.208 "data_offset": 2048, 00:10:34.208 "data_size": 63488 00:10:34.208 } 00:10:34.208 ] 00:10:34.208 } 00:10:34.208 } 00:10:34.208 }' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:34.208 pt2 00:10:34.208 pt3' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.208 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:34.467 [2024-12-06 18:07:46.397404] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4fa297f4-343b-452b-8b25-a6e306dc0d8b '!=' 4fa297f4-343b-452b-8b25-a6e306dc0d8b ']' 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.467 [2024-12-06 18:07:46.445038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.467 18:07:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.467 "name": "raid_bdev1", 00:10:34.467 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:34.467 "strip_size_kb": 0, 00:10:34.467 "state": "online", 00:10:34.467 "raid_level": "raid1", 00:10:34.467 "superblock": true, 00:10:34.467 "num_base_bdevs": 3, 00:10:34.467 "num_base_bdevs_discovered": 2, 00:10:34.467 "num_base_bdevs_operational": 2, 00:10:34.467 "base_bdevs_list": [ 00:10:34.467 { 00:10:34.467 "name": null, 00:10:34.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.467 "is_configured": false, 00:10:34.467 "data_offset": 0, 00:10:34.467 "data_size": 63488 00:10:34.467 }, 00:10:34.467 { 00:10:34.467 "name": "pt2", 00:10:34.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.467 "is_configured": true, 00:10:34.467 "data_offset": 2048, 00:10:34.467 "data_size": 63488 00:10:34.467 }, 00:10:34.467 { 00:10:34.467 "name": "pt3", 00:10:34.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.467 "is_configured": true, 00:10:34.467 "data_offset": 2048, 00:10:34.467 "data_size": 63488 00:10:34.467 } 
00:10:34.467 ] 00:10:34.467 }' 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.467 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.035 [2024-12-06 18:07:46.916262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.035 [2024-12-06 18:07:46.916379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.035 [2024-12-06 18:07:46.916507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.035 [2024-12-06 18:07:46.916584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.035 [2024-12-06 18:07:46.916602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.035 18:07:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.035 [2024-12-06 18:07:46.976206] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:35.035 [2024-12-06 18:07:46.976334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.035 [2024-12-06 18:07:46.976384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:35.035 [2024-12-06 18:07:46.976420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.035 [2024-12-06 18:07:46.978986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.035 [2024-12-06 18:07:46.979099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:35.035 [2024-12-06 18:07:46.979258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:35.035 [2024-12-06 18:07:46.979367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.035 pt2 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.035 18:07:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.035 18:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.035 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.035 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.035 "name": "raid_bdev1", 00:10:35.035 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:35.035 "strip_size_kb": 0, 00:10:35.035 "state": "configuring", 00:10:35.035 "raid_level": "raid1", 00:10:35.035 "superblock": true, 00:10:35.035 "num_base_bdevs": 3, 00:10:35.035 "num_base_bdevs_discovered": 1, 00:10:35.035 "num_base_bdevs_operational": 2, 00:10:35.035 "base_bdevs_list": [ 00:10:35.035 { 00:10:35.035 "name": null, 00:10:35.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.035 "is_configured": false, 00:10:35.035 "data_offset": 2048, 00:10:35.035 "data_size": 63488 00:10:35.035 }, 00:10:35.035 { 00:10:35.035 "name": "pt2", 00:10:35.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.035 "is_configured": true, 00:10:35.035 "data_offset": 2048, 00:10:35.035 "data_size": 63488 00:10:35.035 }, 00:10:35.035 { 00:10:35.035 "name": null, 00:10:35.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:35.035 "is_configured": false, 00:10:35.035 "data_offset": 2048, 00:10:35.035 "data_size": 63488 00:10:35.035 } 
00:10:35.035 ] 00:10:35.035 }' 00:10:35.035 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.035 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.605 [2024-12-06 18:07:47.471384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:35.605 [2024-12-06 18:07:47.471526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.605 [2024-12-06 18:07:47.471581] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:35.605 [2024-12-06 18:07:47.471621] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.605 [2024-12-06 18:07:47.472224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.605 [2024-12-06 18:07:47.472297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:35.605 [2024-12-06 18:07:47.472451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:35.605 [2024-12-06 18:07:47.472516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:35.605 [2024-12-06 18:07:47.472680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:35.605 [2024-12-06 18:07:47.472727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:35.605 [2024-12-06 18:07:47.473099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:35.605 [2024-12-06 18:07:47.473325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:35.605 [2024-12-06 18:07:47.473374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:35.605 [2024-12-06 18:07:47.473595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.605 pt3 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.605 
18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.605 "name": "raid_bdev1", 00:10:35.605 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:35.605 "strip_size_kb": 0, 00:10:35.605 "state": "online", 00:10:35.605 "raid_level": "raid1", 00:10:35.605 "superblock": true, 00:10:35.605 "num_base_bdevs": 3, 00:10:35.605 "num_base_bdevs_discovered": 2, 00:10:35.605 "num_base_bdevs_operational": 2, 00:10:35.605 "base_bdevs_list": [ 00:10:35.605 { 00:10:35.605 "name": null, 00:10:35.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.605 "is_configured": false, 00:10:35.605 "data_offset": 2048, 00:10:35.605 "data_size": 63488 00:10:35.605 }, 00:10:35.605 { 00:10:35.605 "name": "pt2", 00:10:35.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.605 "is_configured": true, 00:10:35.605 "data_offset": 2048, 00:10:35.605 "data_size": 63488 00:10:35.605 }, 00:10:35.605 { 00:10:35.605 "name": "pt3", 00:10:35.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:35.605 "is_configured": true, 00:10:35.605 "data_offset": 2048, 00:10:35.605 "data_size": 63488 00:10:35.605 } 00:10:35.605 ] 00:10:35.605 }' 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.605 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.866 [2024-12-06 18:07:47.950708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.866 [2024-12-06 18:07:47.950832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.866 [2024-12-06 18:07:47.950934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.866 [2024-12-06 18:07:47.951010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.866 [2024-12-06 18:07:47.951022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.866 18:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.866 [2024-12-06 18:07:48.002656] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:35.866 [2024-12-06 18:07:48.002796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.866 [2024-12-06 18:07:48.002840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:35.866 [2024-12-06 18:07:48.002896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.866 [2024-12-06 18:07:48.005581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.866 [2024-12-06 18:07:48.005629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:35.866 [2024-12-06 18:07:48.005743] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:35.866 [2024-12-06 18:07:48.005807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:35.866 [2024-12-06 18:07:48.005997] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater pt1 00:10:35.866 than existing raid bdev raid_bdev1 (2) 00:10:35.866 [2024-12-06 18:07:48.006086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.866 [2024-12-06 18:07:48.006114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:35.866 [2024-12-06 18:07:48.006200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.866 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.866 18:07:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.125 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.125 "name": "raid_bdev1", 00:10:36.125 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:36.125 "strip_size_kb": 0, 00:10:36.125 "state": "configuring", 00:10:36.125 "raid_level": "raid1", 00:10:36.125 "superblock": true, 00:10:36.125 "num_base_bdevs": 3, 00:10:36.125 "num_base_bdevs_discovered": 1, 00:10:36.125 "num_base_bdevs_operational": 2, 00:10:36.125 "base_bdevs_list": [ 00:10:36.125 { 00:10:36.125 "name": null, 00:10:36.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.125 "is_configured": false, 00:10:36.125 "data_offset": 2048, 00:10:36.125 "data_size": 63488 00:10:36.125 }, 00:10:36.125 { 00:10:36.125 "name": "pt2", 00:10:36.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.125 "is_configured": true, 00:10:36.125 "data_offset": 2048, 00:10:36.125 "data_size": 63488 00:10:36.125 }, 00:10:36.125 { 00:10:36.125 "name": null, 00:10:36.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.125 "is_configured": false, 00:10:36.125 "data_offset": 2048, 00:10:36.125 "data_size": 63488 00:10:36.125 } 00:10:36.125 ] 00:10:36.125 }' 00:10:36.125 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.125 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.384 [2024-12-06 18:07:48.497916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:36.384 [2024-12-06 18:07:48.498056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.384 [2024-12-06 18:07:48.498144] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:36.384 [2024-12-06 18:07:48.498183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.384 [2024-12-06 18:07:48.498771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.384 [2024-12-06 18:07:48.498837] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:36.384 [2024-12-06 18:07:48.499007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:36.384 [2024-12-06 18:07:48.499082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:36.384 [2024-12-06 18:07:48.499282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:36.384 [2024-12-06 18:07:48.499329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.384 [2024-12-06 18:07:48.499634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:36.384 [2024-12-06 18:07:48.499831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000008900 00:10:36.384 [2024-12-06 18:07:48.499848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:36.384 [2024-12-06 18:07:48.500026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.384 pt3 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.384 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.385 18:07:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.385 "name": "raid_bdev1", 00:10:36.385 "uuid": "4fa297f4-343b-452b-8b25-a6e306dc0d8b", 00:10:36.385 "strip_size_kb": 0, 00:10:36.385 "state": "online", 00:10:36.385 "raid_level": "raid1", 00:10:36.385 "superblock": true, 00:10:36.385 "num_base_bdevs": 3, 00:10:36.385 "num_base_bdevs_discovered": 2, 00:10:36.385 "num_base_bdevs_operational": 2, 00:10:36.385 "base_bdevs_list": [ 00:10:36.385 { 00:10:36.385 "name": null, 00:10:36.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.385 "is_configured": false, 00:10:36.385 "data_offset": 2048, 00:10:36.385 "data_size": 63488 00:10:36.385 }, 00:10:36.385 { 00:10:36.385 "name": "pt2", 00:10:36.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.385 "is_configured": true, 00:10:36.385 "data_offset": 2048, 00:10:36.385 "data_size": 63488 00:10:36.385 }, 00:10:36.385 { 00:10:36.385 "name": "pt3", 00:10:36.385 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.385 "is_configured": true, 00:10:36.385 "data_offset": 2048, 00:10:36.385 "data_size": 63488 00:10:36.385 } 00:10:36.385 ] 00:10:36.385 }' 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.385 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.995 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:36.995 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.995 18:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:36.995 18:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.995 [2024-12-06 18:07:49.033393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4fa297f4-343b-452b-8b25-a6e306dc0d8b '!=' 4fa297f4-343b-452b-8b25-a6e306dc0d8b ']' 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69090 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 69090 ']' 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 69090 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69090 00:10:36.995 killing process with pid 69090 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69090' 
00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 69090 00:10:36.995 18:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 69090 00:10:36.995 [2024-12-06 18:07:49.107426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.995 [2024-12-06 18:07:49.107577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.995 [2024-12-06 18:07:49.107664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.995 [2024-12-06 18:07:49.107679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:37.563 [2024-12-06 18:07:49.474468] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.960 18:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:38.960 00:10:38.960 real 0m8.250s 00:10:38.960 user 0m12.848s 00:10:38.960 sys 0m1.354s 00:10:38.960 18:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.960 18:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.960 ************************************ 00:10:38.960 END TEST raid_superblock_test 00:10:38.960 ************************************ 00:10:38.960 18:07:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:38.960 18:07:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.960 18:07:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.960 18:07:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.960 ************************************ 00:10:38.960 START TEST raid_read_error_test 00:10:38.960 ************************************ 00:10:38.960 18:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test 
raid1 3 read 00:10:38.960 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:38.960 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:38.960 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:38.960 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:38.960 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:38.961 
18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YCsA4Tibkb 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69537 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69537 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69537 ']' 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.961 18:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.961 [2024-12-06 18:07:50.981131] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:10:38.961 [2024-12-06 18:07:50.981373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69537 ] 00:10:39.219 [2024-12-06 18:07:51.143039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.219 [2024-12-06 18:07:51.278457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.478 [2024-12-06 18:07:51.515984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.478 [2024-12-06 18:07:51.516159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 BaseBdev1_malloc 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 true 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 [2024-12-06 18:07:51.974014] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:40.046 [2024-12-06 18:07:51.974174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.046 [2024-12-06 18:07:51.974225] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:40.046 [2024-12-06 18:07:51.974278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.046 [2024-12-06 18:07:51.976877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.046 BaseBdev1 00:10:40.046 [2024-12-06 18:07:51.976980] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.046 18:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 BaseBdev2_malloc 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 true 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 [2024-12-06 18:07:52.035151] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:40.046 [2024-12-06 18:07:52.035288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.046 [2024-12-06 18:07:52.035335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:40.046 [2024-12-06 18:07:52.035533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.046 [2024-12-06 18:07:52.038145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.046 [2024-12-06 18:07:52.038261] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:40.046 BaseBdev2 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 BaseBdev3_malloc 00:10:40.046 18:07:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 true 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.046 [2024-12-06 18:07:52.110400] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:40.046 [2024-12-06 18:07:52.110520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.046 [2024-12-06 18:07:52.110565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:40.046 [2024-12-06 18:07:52.110603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.046 [2024-12-06 18:07:52.113314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.046 [2024-12-06 18:07:52.113457] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:40.046 BaseBdev3 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:40.046 18:07:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.047 [2024-12-06 18:07:52.118558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.047 [2024-12-06 18:07:52.120819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.047 [2024-12-06 18:07:52.120999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.047 [2024-12-06 18:07:52.121382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:40.047 [2024-12-06 18:07:52.121451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.047 [2024-12-06 18:07:52.121827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:40.047 [2024-12-06 18:07:52.122106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:40.047 [2024-12-06 18:07:52.122162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:40.047 [2024-12-06 18:07:52.122472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.047 18:07:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.047 "name": "raid_bdev1", 00:10:40.047 "uuid": "fe4bee4f-f780-4ad5-8a6d-50c4b4a6e6d9", 00:10:40.047 "strip_size_kb": 0, 00:10:40.047 "state": "online", 00:10:40.047 "raid_level": "raid1", 00:10:40.047 "superblock": true, 00:10:40.047 "num_base_bdevs": 3, 00:10:40.047 "num_base_bdevs_discovered": 3, 00:10:40.047 "num_base_bdevs_operational": 3, 00:10:40.047 "base_bdevs_list": [ 00:10:40.047 { 00:10:40.047 "name": "BaseBdev1", 00:10:40.047 "uuid": "def31ae2-7b06-5b84-9ed2-fecabdabdfcf", 00:10:40.047 "is_configured": true, 00:10:40.047 "data_offset": 2048, 00:10:40.047 "data_size": 63488 00:10:40.047 }, 00:10:40.047 { 00:10:40.047 "name": "BaseBdev2", 00:10:40.047 "uuid": "7d9c036f-9d9d-565c-8c65-0d59f1091518", 00:10:40.047 "is_configured": true, 00:10:40.047 "data_offset": 2048, 00:10:40.047 "data_size": 63488 
00:10:40.047 }, 00:10:40.047 { 00:10:40.047 "name": "BaseBdev3", 00:10:40.047 "uuid": "a915b7c1-39a3-5479-ae1f-0ccf425fc3af", 00:10:40.047 "is_configured": true, 00:10:40.047 "data_offset": 2048, 00:10:40.047 "data_size": 63488 00:10:40.047 } 00:10:40.047 ] 00:10:40.047 }' 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.047 18:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.613 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:40.614 18:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:40.614 [2024-12-06 18:07:52.711341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.548 
18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.548 "name": "raid_bdev1", 00:10:41.548 "uuid": "fe4bee4f-f780-4ad5-8a6d-50c4b4a6e6d9", 00:10:41.548 "strip_size_kb": 0, 00:10:41.548 "state": "online", 00:10:41.548 "raid_level": "raid1", 00:10:41.548 "superblock": true, 00:10:41.548 "num_base_bdevs": 3, 00:10:41.548 "num_base_bdevs_discovered": 3, 00:10:41.548 "num_base_bdevs_operational": 3, 00:10:41.548 "base_bdevs_list": [ 00:10:41.548 { 00:10:41.548 "name": "BaseBdev1", 00:10:41.548 "uuid": "def31ae2-7b06-5b84-9ed2-fecabdabdfcf", 
00:10:41.548 "is_configured": true, 00:10:41.548 "data_offset": 2048, 00:10:41.548 "data_size": 63488 00:10:41.548 }, 00:10:41.548 { 00:10:41.548 "name": "BaseBdev2", 00:10:41.548 "uuid": "7d9c036f-9d9d-565c-8c65-0d59f1091518", 00:10:41.548 "is_configured": true, 00:10:41.548 "data_offset": 2048, 00:10:41.548 "data_size": 63488 00:10:41.548 }, 00:10:41.548 { 00:10:41.548 "name": "BaseBdev3", 00:10:41.548 "uuid": "a915b7c1-39a3-5479-ae1f-0ccf425fc3af", 00:10:41.548 "is_configured": true, 00:10:41.548 "data_offset": 2048, 00:10:41.548 "data_size": 63488 00:10:41.548 } 00:10:41.548 ] 00:10:41.548 }' 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.548 18:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.115 18:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.115 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.115 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.115 [2024-12-06 18:07:54.051592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.115 [2024-12-06 18:07:54.051726] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.115 [2024-12-06 18:07:54.055142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.115 [2024-12-06 18:07:54.055223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.115 [2024-12-06 18:07:54.055347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.115 [2024-12-06 18:07:54.055360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:42.115 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:42.115 { 00:10:42.116 "results": [ 00:10:42.116 { 00:10:42.116 "job": "raid_bdev1", 00:10:42.116 "core_mask": "0x1", 00:10:42.116 "workload": "randrw", 00:10:42.116 "percentage": 50, 00:10:42.116 "status": "finished", 00:10:42.116 "queue_depth": 1, 00:10:42.116 "io_size": 131072, 00:10:42.116 "runtime": 1.339434, 00:10:42.116 "iops": 10927.750079511197, 00:10:42.116 "mibps": 1365.9687599388997, 00:10:42.116 "io_failed": 0, 00:10:42.116 "io_timeout": 0, 00:10:42.116 "avg_latency_us": 88.2005655942215, 00:10:42.116 "min_latency_us": 29.959825327510917, 00:10:42.116 "max_latency_us": 1817.2646288209608 00:10:42.116 } 00:10:42.116 ], 00:10:42.116 "core_count": 1 00:10:42.116 } 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69537 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69537 ']' 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69537 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69537 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.116 killing process with pid 69537 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69537' 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69537 00:10:42.116 18:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69537 00:10:42.116 [2024-12-06 18:07:54.086012] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.373 [2024-12-06 18:07:54.363823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YCsA4Tibkb 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:43.745 ************************************ 00:10:43.745 END TEST raid_read_error_test 00:10:43.745 ************************************ 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:43.745 00:10:43.745 real 0m4.934s 00:10:43.745 user 0m5.897s 00:10:43.745 sys 0m0.589s 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.745 18:07:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.745 18:07:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:43.745 18:07:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.745 18:07:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.745 18:07:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.745 ************************************ 00:10:43.745 START TEST raid_write_error_test 00:10:43.745 ************************************ 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:43.745 18:07:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0oIWhvriJD 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69688 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69688 00:10:43.745 18:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69688 ']' 00:10:43.746 18:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.746 18:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.746 18:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:43.746 18:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.746 18:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.003 [2024-12-06 18:07:55.986174] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:10:44.003 [2024-12-06 18:07:55.986518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69688 ] 00:10:44.003 [2024-12-06 18:07:56.155673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.260 [2024-12-06 18:07:56.293205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.517 [2024-12-06 18:07:56.530551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.517 [2024-12-06 18:07:56.530606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.774 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.774 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:44.774 18:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.775 18:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:44.775 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.775 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.775 BaseBdev1_malloc 00:10:44.775 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.775 18:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:44.775 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.775 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.032 true 00:10:45.032 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.032 18:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:45.032 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.032 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.032 [2024-12-06 18:07:56.952893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:45.032 [2024-12-06 18:07:56.953051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.032 [2024-12-06 18:07:56.953098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:45.032 [2024-12-06 18:07:56.953113] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.032 [2024-12-06 18:07:56.955819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.032 [2024-12-06 18:07:56.955868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:45.032 BaseBdev1 00:10:45.032 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.032 18:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.032 18:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:45.032 18:07:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.032 18:07:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.032 BaseBdev2_malloc 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.032 true 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.032 [2024-12-06 18:07:57.027763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:45.032 [2024-12-06 18:07:57.027850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.032 [2024-12-06 18:07:57.027877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:45.032 [2024-12-06 18:07:57.027890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.032 [2024-12-06 18:07:57.030503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.032 [2024-12-06 18:07:57.030556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:45.032 BaseBdev2 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:45.032 18:07:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.032 BaseBdev3_malloc 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.032 true 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.032 [2024-12-06 18:07:57.117806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:45.032 [2024-12-06 18:07:57.117884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.032 [2024-12-06 18:07:57.117911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:45.032 [2024-12-06 18:07:57.117924] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.032 [2024-12-06 18:07:57.120514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.032 [2024-12-06 18:07:57.120563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:45.032 BaseBdev3 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.032 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.032 [2024-12-06 18:07:57.129887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.032 [2024-12-06 18:07:57.132118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.032 [2024-12-06 18:07:57.132214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.032 [2024-12-06 18:07:57.132473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:45.033 [2024-12-06 18:07:57.132496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:45.033 [2024-12-06 18:07:57.132822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:45.033 [2024-12-06 18:07:57.133051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:45.033 [2024-12-06 18:07:57.133086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:45.033 [2024-12-06 18:07:57.133301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.033 "name": "raid_bdev1", 00:10:45.033 "uuid": "358eb6a5-6904-4369-9f0d-f3518f4d1e49", 00:10:45.033 "strip_size_kb": 0, 00:10:45.033 "state": "online", 00:10:45.033 "raid_level": "raid1", 00:10:45.033 "superblock": true, 00:10:45.033 "num_base_bdevs": 3, 00:10:45.033 "num_base_bdevs_discovered": 3, 00:10:45.033 "num_base_bdevs_operational": 3, 00:10:45.033 "base_bdevs_list": [ 00:10:45.033 { 00:10:45.033 "name": "BaseBdev1", 00:10:45.033 
"uuid": "3baed843-2295-5874-8d23-8bc83719af0e", 00:10:45.033 "is_configured": true, 00:10:45.033 "data_offset": 2048, 00:10:45.033 "data_size": 63488 00:10:45.033 }, 00:10:45.033 { 00:10:45.033 "name": "BaseBdev2", 00:10:45.033 "uuid": "6b56be57-7ca0-5f64-a6fc-00674731f46f", 00:10:45.033 "is_configured": true, 00:10:45.033 "data_offset": 2048, 00:10:45.033 "data_size": 63488 00:10:45.033 }, 00:10:45.033 { 00:10:45.033 "name": "BaseBdev3", 00:10:45.033 "uuid": "5990fabb-2685-5d9f-b3af-fd10847e1a64", 00:10:45.033 "is_configured": true, 00:10:45.033 "data_offset": 2048, 00:10:45.033 "data_size": 63488 00:10:45.033 } 00:10:45.033 ] 00:10:45.033 }' 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.033 18:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.598 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:45.598 18:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:45.598 [2024-12-06 18:07:57.722532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.560 [2024-12-06 18:07:58.618733] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:46.560 [2024-12-06 18:07:58.618807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.560 [2024-12-06 18:07:58.619043] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.560 "name": "raid_bdev1", 00:10:46.560 "uuid": "358eb6a5-6904-4369-9f0d-f3518f4d1e49", 00:10:46.560 "strip_size_kb": 0, 00:10:46.560 "state": "online", 00:10:46.560 "raid_level": "raid1", 00:10:46.560 "superblock": true, 00:10:46.560 "num_base_bdevs": 3, 00:10:46.560 "num_base_bdevs_discovered": 2, 00:10:46.560 "num_base_bdevs_operational": 2, 00:10:46.560 "base_bdevs_list": [ 00:10:46.560 { 00:10:46.560 "name": null, 00:10:46.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.560 "is_configured": false, 00:10:46.560 "data_offset": 0, 00:10:46.560 "data_size": 63488 00:10:46.560 }, 00:10:46.560 { 00:10:46.560 "name": "BaseBdev2", 00:10:46.560 "uuid": "6b56be57-7ca0-5f64-a6fc-00674731f46f", 00:10:46.560 "is_configured": true, 00:10:46.560 "data_offset": 2048, 00:10:46.560 "data_size": 63488 00:10:46.560 }, 00:10:46.560 { 00:10:46.560 "name": "BaseBdev3", 00:10:46.560 "uuid": "5990fabb-2685-5d9f-b3af-fd10847e1a64", 00:10:46.560 "is_configured": true, 00:10:46.560 "data_offset": 2048, 00:10:46.560 "data_size": 63488 00:10:46.560 } 00:10:46.560 ] 00:10:46.560 }' 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.560 18:07:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.126 [2024-12-06 18:07:59.121715] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.126 [2024-12-06 18:07:59.121759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.126 [2024-12-06 18:07:59.124974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.126 [2024-12-06 18:07:59.125056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.126 [2024-12-06 18:07:59.125160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.126 [2024-12-06 18:07:59.125186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.126 { 00:10:47.126 "results": [ 00:10:47.126 { 00:10:47.126 "job": "raid_bdev1", 00:10:47.126 "core_mask": "0x1", 00:10:47.126 "workload": "randrw", 00:10:47.126 "percentage": 50, 00:10:47.126 "status": "finished", 00:10:47.126 "queue_depth": 1, 00:10:47.126 "io_size": 131072, 00:10:47.126 "runtime": 1.399761, 00:10:47.126 "iops": 12202.083069895503, 00:10:47.126 "mibps": 1525.260383736938, 00:10:47.126 "io_failed": 0, 00:10:47.126 "io_timeout": 0, 00:10:47.126 "avg_latency_us": 78.61943446202307, 00:10:47.126 "min_latency_us": 30.183406113537117, 00:10:47.126 "max_latency_us": 1752.8733624454148 00:10:47.126 } 00:10:47.126 ], 00:10:47.126 "core_count": 1 00:10:47.126 } 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69688 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69688 ']' 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69688 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:47.126 18:07:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69688 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.126 killing process with pid 69688 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69688' 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69688 00:10:47.126 18:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69688 00:10:47.126 [2024-12-06 18:07:59.160419] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.384 [2024-12-06 18:07:59.441708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.759 18:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0oIWhvriJD 00:10:48.759 18:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:48.759 18:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:48.759 18:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:48.759 18:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:48.759 18:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.759 18:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:48.759 18:08:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:48.759 00:10:48.759 real 0m5.035s 00:10:48.759 user 0m6.042s 00:10:48.759 sys 0m0.566s 00:10:48.759 18:08:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.759 18:08:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.759 ************************************ 00:10:48.759 END TEST raid_write_error_test 00:10:48.759 ************************************ 00:10:48.759 18:08:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:48.759 18:08:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:48.759 18:08:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:49.018 18:08:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:49.018 18:08:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.018 18:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:49.018 ************************************ 00:10:49.018 START TEST raid_state_function_test 00:10:49.018 ************************************ 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:49.018 
18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69836 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69836' 00:10:49.018 Process raid pid: 69836 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69836 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69836 ']' 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.018 18:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.018 [2024-12-06 18:08:01.041497] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:10:49.018 [2024-12-06 18:08:01.041644] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.276 [2024-12-06 18:08:01.225556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.276 [2024-12-06 18:08:01.358583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.536 [2024-12-06 18:08:01.609118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.536 [2024-12-06 18:08:01.609175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.121 [2024-12-06 18:08:01.977269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:50.121 [2024-12-06 18:08:01.977344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:50.121 [2024-12-06 18:08:01.977356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.121 [2024-12-06 18:08:01.977368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.121 [2024-12-06 18:08:01.977376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:50.121 [2024-12-06 18:08:01.977386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.121 [2024-12-06 18:08:01.977393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:50.121 [2024-12-06 18:08:01.977404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:50.121 18:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.121 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.121 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.121 "name": "Existed_Raid", 00:10:50.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.121 "strip_size_kb": 64, 00:10:50.121 "state": "configuring", 00:10:50.121 "raid_level": "raid0", 00:10:50.121 "superblock": false, 00:10:50.121 "num_base_bdevs": 4, 00:10:50.121 "num_base_bdevs_discovered": 0, 00:10:50.121 "num_base_bdevs_operational": 4, 00:10:50.121 "base_bdevs_list": [ 00:10:50.121 { 00:10:50.121 "name": "BaseBdev1", 00:10:50.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.121 "is_configured": false, 00:10:50.121 "data_offset": 0, 00:10:50.121 "data_size": 0 00:10:50.121 }, 00:10:50.121 { 00:10:50.121 "name": "BaseBdev2", 00:10:50.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.121 "is_configured": false, 00:10:50.121 "data_offset": 0, 00:10:50.121 "data_size": 0 00:10:50.121 }, 00:10:50.121 { 00:10:50.121 "name": "BaseBdev3", 00:10:50.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.121 "is_configured": false, 00:10:50.121 "data_offset": 0, 00:10:50.121 "data_size": 0 00:10:50.121 }, 00:10:50.121 { 00:10:50.121 "name": "BaseBdev4", 00:10:50.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.121 "is_configured": false, 00:10:50.121 "data_offset": 0, 00:10:50.121 "data_size": 0 00:10:50.121 } 00:10:50.121 ] 00:10:50.121 }' 00:10:50.121 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.122 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.381 [2024-12-06 18:08:02.452394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.381 [2024-12-06 18:08:02.452451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.381 [2024-12-06 18:08:02.464397] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:50.381 [2024-12-06 18:08:02.464467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:50.381 [2024-12-06 18:08:02.464484] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.381 [2024-12-06 18:08:02.464500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.381 [2024-12-06 18:08:02.464510] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.381 [2024-12-06 18:08:02.464526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.381 [2024-12-06 18:08:02.464537] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:50.381 [2024-12-06 18:08:02.464554] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.381 [2024-12-06 18:08:02.518640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.381 BaseBdev1 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.381 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.381 [ 00:10:50.381 { 00:10:50.381 "name": "BaseBdev1", 00:10:50.381 "aliases": [ 00:10:50.381 "d7dfea78-f388-4dfb-8b01-6b09b9eb97c3" 00:10:50.381 ], 00:10:50.381 "product_name": "Malloc disk", 00:10:50.381 "block_size": 512, 00:10:50.381 "num_blocks": 65536, 00:10:50.381 "uuid": "d7dfea78-f388-4dfb-8b01-6b09b9eb97c3", 00:10:50.381 "assigned_rate_limits": { 00:10:50.381 "rw_ios_per_sec": 0, 00:10:50.381 "rw_mbytes_per_sec": 0, 00:10:50.381 "r_mbytes_per_sec": 0, 00:10:50.381 "w_mbytes_per_sec": 0 00:10:50.381 }, 00:10:50.381 "claimed": true, 00:10:50.640 "claim_type": "exclusive_write", 00:10:50.640 "zoned": false, 00:10:50.640 "supported_io_types": { 00:10:50.640 "read": true, 00:10:50.640 "write": true, 00:10:50.640 "unmap": true, 00:10:50.640 "flush": true, 00:10:50.640 "reset": true, 00:10:50.640 "nvme_admin": false, 00:10:50.640 "nvme_io": false, 00:10:50.640 "nvme_io_md": false, 00:10:50.640 "write_zeroes": true, 00:10:50.640 "zcopy": true, 00:10:50.640 "get_zone_info": false, 00:10:50.640 "zone_management": false, 00:10:50.640 "zone_append": false, 00:10:50.640 "compare": false, 00:10:50.640 "compare_and_write": false, 00:10:50.640 "abort": true, 00:10:50.640 "seek_hole": false, 00:10:50.640 "seek_data": false, 00:10:50.640 "copy": true, 00:10:50.640 "nvme_iov_md": false 00:10:50.640 }, 00:10:50.640 "memory_domains": [ 00:10:50.640 { 00:10:50.640 "dma_device_id": "system", 00:10:50.640 "dma_device_type": 1 00:10:50.640 }, 00:10:50.640 { 00:10:50.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.640 "dma_device_type": 2 00:10:50.640 } 00:10:50.640 ], 00:10:50.640 "driver_specific": {} 00:10:50.640 } 00:10:50.640 ] 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.640 "name": "Existed_Raid", 
00:10:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.640 "strip_size_kb": 64, 00:10:50.640 "state": "configuring", 00:10:50.640 "raid_level": "raid0", 00:10:50.640 "superblock": false, 00:10:50.640 "num_base_bdevs": 4, 00:10:50.640 "num_base_bdevs_discovered": 1, 00:10:50.640 "num_base_bdevs_operational": 4, 00:10:50.640 "base_bdevs_list": [ 00:10:50.640 { 00:10:50.640 "name": "BaseBdev1", 00:10:50.640 "uuid": "d7dfea78-f388-4dfb-8b01-6b09b9eb97c3", 00:10:50.640 "is_configured": true, 00:10:50.640 "data_offset": 0, 00:10:50.640 "data_size": 65536 00:10:50.640 }, 00:10:50.640 { 00:10:50.640 "name": "BaseBdev2", 00:10:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.640 "is_configured": false, 00:10:50.640 "data_offset": 0, 00:10:50.640 "data_size": 0 00:10:50.640 }, 00:10:50.640 { 00:10:50.640 "name": "BaseBdev3", 00:10:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.640 "is_configured": false, 00:10:50.640 "data_offset": 0, 00:10:50.640 "data_size": 0 00:10:50.640 }, 00:10:50.640 { 00:10:50.640 "name": "BaseBdev4", 00:10:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.640 "is_configured": false, 00:10:50.640 "data_offset": 0, 00:10:50.640 "data_size": 0 00:10:50.640 } 00:10:50.640 ] 00:10:50.640 }' 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.640 18:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.953 [2024-12-06 18:08:03.017879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.953 [2024-12-06 18:08:03.017968] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.953 [2024-12-06 18:08:03.033962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.953 [2024-12-06 18:08:03.036214] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.953 [2024-12-06 18:08:03.036282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.953 [2024-12-06 18:08:03.036299] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.953 [2024-12-06 18:08:03.036316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.953 [2024-12-06 18:08:03.036329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:50.953 [2024-12-06 18:08:03.036345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.953 "name": "Existed_Raid", 00:10:50.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.953 "strip_size_kb": 64, 00:10:50.953 "state": "configuring", 00:10:50.953 "raid_level": "raid0", 00:10:50.953 "superblock": false, 00:10:50.953 "num_base_bdevs": 4, 00:10:50.953 
"num_base_bdevs_discovered": 1, 00:10:50.953 "num_base_bdevs_operational": 4, 00:10:50.953 "base_bdevs_list": [ 00:10:50.953 { 00:10:50.953 "name": "BaseBdev1", 00:10:50.953 "uuid": "d7dfea78-f388-4dfb-8b01-6b09b9eb97c3", 00:10:50.953 "is_configured": true, 00:10:50.953 "data_offset": 0, 00:10:50.953 "data_size": 65536 00:10:50.953 }, 00:10:50.953 { 00:10:50.953 "name": "BaseBdev2", 00:10:50.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.953 "is_configured": false, 00:10:50.953 "data_offset": 0, 00:10:50.953 "data_size": 0 00:10:50.953 }, 00:10:50.953 { 00:10:50.953 "name": "BaseBdev3", 00:10:50.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.953 "is_configured": false, 00:10:50.953 "data_offset": 0, 00:10:50.953 "data_size": 0 00:10:50.953 }, 00:10:50.953 { 00:10:50.953 "name": "BaseBdev4", 00:10:50.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.953 "is_configured": false, 00:10:50.953 "data_offset": 0, 00:10:50.953 "data_size": 0 00:10:50.953 } 00:10:50.953 ] 00:10:50.953 }' 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.953 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.520 [2024-12-06 18:08:03.526612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.520 BaseBdev2 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:51.520 18:08:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.520 [ 00:10:51.520 { 00:10:51.520 "name": "BaseBdev2", 00:10:51.520 "aliases": [ 00:10:51.520 "7c1644fe-42f1-46e3-8d04-5697d708a61f" 00:10:51.520 ], 00:10:51.520 "product_name": "Malloc disk", 00:10:51.520 "block_size": 512, 00:10:51.520 "num_blocks": 65536, 00:10:51.520 "uuid": "7c1644fe-42f1-46e3-8d04-5697d708a61f", 00:10:51.520 "assigned_rate_limits": { 00:10:51.520 "rw_ios_per_sec": 0, 00:10:51.520 "rw_mbytes_per_sec": 0, 00:10:51.520 "r_mbytes_per_sec": 0, 00:10:51.520 "w_mbytes_per_sec": 0 00:10:51.520 }, 00:10:51.520 "claimed": true, 00:10:51.520 "claim_type": "exclusive_write", 00:10:51.520 "zoned": false, 00:10:51.520 "supported_io_types": { 
00:10:51.520 "read": true, 00:10:51.520 "write": true, 00:10:51.520 "unmap": true, 00:10:51.520 "flush": true, 00:10:51.520 "reset": true, 00:10:51.520 "nvme_admin": false, 00:10:51.520 "nvme_io": false, 00:10:51.520 "nvme_io_md": false, 00:10:51.520 "write_zeroes": true, 00:10:51.520 "zcopy": true, 00:10:51.520 "get_zone_info": false, 00:10:51.520 "zone_management": false, 00:10:51.520 "zone_append": false, 00:10:51.520 "compare": false, 00:10:51.520 "compare_and_write": false, 00:10:51.520 "abort": true, 00:10:51.520 "seek_hole": false, 00:10:51.520 "seek_data": false, 00:10:51.520 "copy": true, 00:10:51.520 "nvme_iov_md": false 00:10:51.520 }, 00:10:51.520 "memory_domains": [ 00:10:51.520 { 00:10:51.520 "dma_device_id": "system", 00:10:51.520 "dma_device_type": 1 00:10:51.520 }, 00:10:51.520 { 00:10:51.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.520 "dma_device_type": 2 00:10:51.520 } 00:10:51.520 ], 00:10:51.520 "driver_specific": {} 00:10:51.520 } 00:10:51.520 ] 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.520 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.521 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.521 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.521 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.521 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.521 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.521 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.521 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.521 "name": "Existed_Raid", 00:10:51.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.521 "strip_size_kb": 64, 00:10:51.521 "state": "configuring", 00:10:51.521 "raid_level": "raid0", 00:10:51.521 "superblock": false, 00:10:51.521 "num_base_bdevs": 4, 00:10:51.521 "num_base_bdevs_discovered": 2, 00:10:51.521 "num_base_bdevs_operational": 4, 00:10:51.521 "base_bdevs_list": [ 00:10:51.521 { 00:10:51.521 "name": "BaseBdev1", 00:10:51.521 "uuid": "d7dfea78-f388-4dfb-8b01-6b09b9eb97c3", 00:10:51.521 "is_configured": true, 00:10:51.521 "data_offset": 0, 00:10:51.521 "data_size": 65536 00:10:51.521 }, 00:10:51.521 { 00:10:51.521 "name": "BaseBdev2", 00:10:51.521 "uuid": "7c1644fe-42f1-46e3-8d04-5697d708a61f", 00:10:51.521 
"is_configured": true, 00:10:51.521 "data_offset": 0, 00:10:51.521 "data_size": 65536 00:10:51.521 }, 00:10:51.521 { 00:10:51.521 "name": "BaseBdev3", 00:10:51.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.521 "is_configured": false, 00:10:51.521 "data_offset": 0, 00:10:51.521 "data_size": 0 00:10:51.521 }, 00:10:51.521 { 00:10:51.521 "name": "BaseBdev4", 00:10:51.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.521 "is_configured": false, 00:10:51.521 "data_offset": 0, 00:10:51.521 "data_size": 0 00:10:51.521 } 00:10:51.521 ] 00:10:51.521 }' 00:10:51.521 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.521 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.087 18:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:52.087 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.087 18:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.087 [2024-12-06 18:08:04.059358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.087 BaseBdev3 00:10:52.087 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.087 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.088 [ 00:10:52.088 { 00:10:52.088 "name": "BaseBdev3", 00:10:52.088 "aliases": [ 00:10:52.088 "3934a9a4-3357-478c-a19d-a9ffdf6e22f6" 00:10:52.088 ], 00:10:52.088 "product_name": "Malloc disk", 00:10:52.088 "block_size": 512, 00:10:52.088 "num_blocks": 65536, 00:10:52.088 "uuid": "3934a9a4-3357-478c-a19d-a9ffdf6e22f6", 00:10:52.088 "assigned_rate_limits": { 00:10:52.088 "rw_ios_per_sec": 0, 00:10:52.088 "rw_mbytes_per_sec": 0, 00:10:52.088 "r_mbytes_per_sec": 0, 00:10:52.088 "w_mbytes_per_sec": 0 00:10:52.088 }, 00:10:52.088 "claimed": true, 00:10:52.088 "claim_type": "exclusive_write", 00:10:52.088 "zoned": false, 00:10:52.088 "supported_io_types": { 00:10:52.088 "read": true, 00:10:52.088 "write": true, 00:10:52.088 "unmap": true, 00:10:52.088 "flush": true, 00:10:52.088 "reset": true, 00:10:52.088 "nvme_admin": false, 00:10:52.088 "nvme_io": false, 00:10:52.088 "nvme_io_md": false, 00:10:52.088 "write_zeroes": true, 00:10:52.088 "zcopy": true, 00:10:52.088 "get_zone_info": false, 00:10:52.088 "zone_management": false, 00:10:52.088 "zone_append": false, 00:10:52.088 "compare": false, 00:10:52.088 "compare_and_write": false, 
00:10:52.088 "abort": true, 00:10:52.088 "seek_hole": false, 00:10:52.088 "seek_data": false, 00:10:52.088 "copy": true, 00:10:52.088 "nvme_iov_md": false 00:10:52.088 }, 00:10:52.088 "memory_domains": [ 00:10:52.088 { 00:10:52.088 "dma_device_id": "system", 00:10:52.088 "dma_device_type": 1 00:10:52.088 }, 00:10:52.088 { 00:10:52.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.088 "dma_device_type": 2 00:10:52.088 } 00:10:52.088 ], 00:10:52.088 "driver_specific": {} 00:10:52.088 } 00:10:52.088 ] 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.088 "name": "Existed_Raid", 00:10:52.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.088 "strip_size_kb": 64, 00:10:52.088 "state": "configuring", 00:10:52.088 "raid_level": "raid0", 00:10:52.088 "superblock": false, 00:10:52.088 "num_base_bdevs": 4, 00:10:52.088 "num_base_bdevs_discovered": 3, 00:10:52.088 "num_base_bdevs_operational": 4, 00:10:52.088 "base_bdevs_list": [ 00:10:52.088 { 00:10:52.088 "name": "BaseBdev1", 00:10:52.088 "uuid": "d7dfea78-f388-4dfb-8b01-6b09b9eb97c3", 00:10:52.088 "is_configured": true, 00:10:52.088 "data_offset": 0, 00:10:52.088 "data_size": 65536 00:10:52.088 }, 00:10:52.088 { 00:10:52.088 "name": "BaseBdev2", 00:10:52.088 "uuid": "7c1644fe-42f1-46e3-8d04-5697d708a61f", 00:10:52.088 "is_configured": true, 00:10:52.088 "data_offset": 0, 00:10:52.088 "data_size": 65536 00:10:52.088 }, 00:10:52.088 { 00:10:52.088 "name": "BaseBdev3", 00:10:52.088 "uuid": "3934a9a4-3357-478c-a19d-a9ffdf6e22f6", 00:10:52.088 "is_configured": true, 00:10:52.088 "data_offset": 0, 00:10:52.088 "data_size": 65536 00:10:52.088 }, 00:10:52.088 { 00:10:52.088 "name": "BaseBdev4", 00:10:52.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.088 "is_configured": false, 
00:10:52.088 "data_offset": 0, 00:10:52.088 "data_size": 0 00:10:52.088 } 00:10:52.088 ] 00:10:52.088 }' 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.088 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.657 BaseBdev4 00:10:52.657 [2024-12-06 18:08:04.584396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:52.657 [2024-12-06 18:08:04.584448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:52.657 [2024-12-06 18:08:04.584459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:52.657 [2024-12-06 18:08:04.584789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:52.657 [2024-12-06 18:08:04.584972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:52.657 [2024-12-06 18:08:04.584986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:52.657 [2024-12-06 18:08:04.585350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.657 [ 00:10:52.657 { 00:10:52.657 "name": "BaseBdev4", 00:10:52.657 "aliases": [ 00:10:52.657 "b803c809-7a1f-4306-b0b5-4f22ac02c633" 00:10:52.657 ], 00:10:52.657 "product_name": "Malloc disk", 00:10:52.657 "block_size": 512, 00:10:52.657 "num_blocks": 65536, 00:10:52.657 "uuid": "b803c809-7a1f-4306-b0b5-4f22ac02c633", 00:10:52.657 "assigned_rate_limits": { 00:10:52.657 "rw_ios_per_sec": 0, 00:10:52.657 "rw_mbytes_per_sec": 0, 00:10:52.657 "r_mbytes_per_sec": 0, 00:10:52.657 "w_mbytes_per_sec": 0 00:10:52.657 }, 00:10:52.657 "claimed": true, 00:10:52.657 "claim_type": "exclusive_write", 00:10:52.657 "zoned": false, 00:10:52.657 "supported_io_types": { 00:10:52.657 "read": true, 00:10:52.657 "write": true, 00:10:52.657 "unmap": true, 00:10:52.657 "flush": true, 00:10:52.657 "reset": true, 00:10:52.657 
"nvme_admin": false, 00:10:52.657 "nvme_io": false, 00:10:52.657 "nvme_io_md": false, 00:10:52.657 "write_zeroes": true, 00:10:52.657 "zcopy": true, 00:10:52.657 "get_zone_info": false, 00:10:52.657 "zone_management": false, 00:10:52.657 "zone_append": false, 00:10:52.657 "compare": false, 00:10:52.657 "compare_and_write": false, 00:10:52.657 "abort": true, 00:10:52.657 "seek_hole": false, 00:10:52.657 "seek_data": false, 00:10:52.657 "copy": true, 00:10:52.657 "nvme_iov_md": false 00:10:52.657 }, 00:10:52.657 "memory_domains": [ 00:10:52.657 { 00:10:52.657 "dma_device_id": "system", 00:10:52.657 "dma_device_type": 1 00:10:52.657 }, 00:10:52.657 { 00:10:52.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.657 "dma_device_type": 2 00:10:52.657 } 00:10:52.657 ], 00:10:52.657 "driver_specific": {} 00:10:52.657 } 00:10:52.657 ] 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.657 18:08:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.657 "name": "Existed_Raid", 00:10:52.657 "uuid": "6d2650d1-1b86-41f5-8c03-a1e057cc259a", 00:10:52.657 "strip_size_kb": 64, 00:10:52.657 "state": "online", 00:10:52.657 "raid_level": "raid0", 00:10:52.657 "superblock": false, 00:10:52.657 "num_base_bdevs": 4, 00:10:52.657 "num_base_bdevs_discovered": 4, 00:10:52.657 "num_base_bdevs_operational": 4, 00:10:52.657 "base_bdevs_list": [ 00:10:52.657 { 00:10:52.657 "name": "BaseBdev1", 00:10:52.657 "uuid": "d7dfea78-f388-4dfb-8b01-6b09b9eb97c3", 00:10:52.657 "is_configured": true, 00:10:52.657 "data_offset": 0, 00:10:52.657 "data_size": 65536 00:10:52.657 }, 00:10:52.657 { 00:10:52.657 "name": "BaseBdev2", 00:10:52.657 "uuid": "7c1644fe-42f1-46e3-8d04-5697d708a61f", 00:10:52.657 "is_configured": true, 00:10:52.657 "data_offset": 0, 00:10:52.657 "data_size": 65536 00:10:52.657 }, 00:10:52.657 { 00:10:52.657 "name": "BaseBdev3", 00:10:52.657 "uuid": 
"3934a9a4-3357-478c-a19d-a9ffdf6e22f6", 00:10:52.657 "is_configured": true, 00:10:52.657 "data_offset": 0, 00:10:52.657 "data_size": 65536 00:10:52.657 }, 00:10:52.657 { 00:10:52.657 "name": "BaseBdev4", 00:10:52.657 "uuid": "b803c809-7a1f-4306-b0b5-4f22ac02c633", 00:10:52.657 "is_configured": true, 00:10:52.657 "data_offset": 0, 00:10:52.657 "data_size": 65536 00:10:52.657 } 00:10:52.657 ] 00:10:52.657 }' 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.657 18:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.915 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.915 [2024-12-06 18:08:05.076102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.174 18:08:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.174 "name": "Existed_Raid", 00:10:53.174 "aliases": [ 00:10:53.174 "6d2650d1-1b86-41f5-8c03-a1e057cc259a" 00:10:53.174 ], 00:10:53.174 "product_name": "Raid Volume", 00:10:53.174 "block_size": 512, 00:10:53.174 "num_blocks": 262144, 00:10:53.174 "uuid": "6d2650d1-1b86-41f5-8c03-a1e057cc259a", 00:10:53.174 "assigned_rate_limits": { 00:10:53.174 "rw_ios_per_sec": 0, 00:10:53.174 "rw_mbytes_per_sec": 0, 00:10:53.174 "r_mbytes_per_sec": 0, 00:10:53.174 "w_mbytes_per_sec": 0 00:10:53.174 }, 00:10:53.174 "claimed": false, 00:10:53.174 "zoned": false, 00:10:53.174 "supported_io_types": { 00:10:53.174 "read": true, 00:10:53.174 "write": true, 00:10:53.174 "unmap": true, 00:10:53.174 "flush": true, 00:10:53.174 "reset": true, 00:10:53.174 "nvme_admin": false, 00:10:53.174 "nvme_io": false, 00:10:53.174 "nvme_io_md": false, 00:10:53.174 "write_zeroes": true, 00:10:53.174 "zcopy": false, 00:10:53.174 "get_zone_info": false, 00:10:53.174 "zone_management": false, 00:10:53.174 "zone_append": false, 00:10:53.174 "compare": false, 00:10:53.174 "compare_and_write": false, 00:10:53.174 "abort": false, 00:10:53.174 "seek_hole": false, 00:10:53.174 "seek_data": false, 00:10:53.174 "copy": false, 00:10:53.174 "nvme_iov_md": false 00:10:53.174 }, 00:10:53.174 "memory_domains": [ 00:10:53.174 { 00:10:53.174 "dma_device_id": "system", 00:10:53.174 "dma_device_type": 1 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.174 "dma_device_type": 2 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "dma_device_id": "system", 00:10:53.174 "dma_device_type": 1 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.174 "dma_device_type": 2 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "dma_device_id": "system", 00:10:53.174 "dma_device_type": 1 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:53.174 "dma_device_type": 2 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "dma_device_id": "system", 00:10:53.174 "dma_device_type": 1 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.174 "dma_device_type": 2 00:10:53.174 } 00:10:53.174 ], 00:10:53.174 "driver_specific": { 00:10:53.174 "raid": { 00:10:53.174 "uuid": "6d2650d1-1b86-41f5-8c03-a1e057cc259a", 00:10:53.174 "strip_size_kb": 64, 00:10:53.174 "state": "online", 00:10:53.174 "raid_level": "raid0", 00:10:53.174 "superblock": false, 00:10:53.174 "num_base_bdevs": 4, 00:10:53.174 "num_base_bdevs_discovered": 4, 00:10:53.174 "num_base_bdevs_operational": 4, 00:10:53.174 "base_bdevs_list": [ 00:10:53.174 { 00:10:53.174 "name": "BaseBdev1", 00:10:53.174 "uuid": "d7dfea78-f388-4dfb-8b01-6b09b9eb97c3", 00:10:53.174 "is_configured": true, 00:10:53.174 "data_offset": 0, 00:10:53.174 "data_size": 65536 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "name": "BaseBdev2", 00:10:53.174 "uuid": "7c1644fe-42f1-46e3-8d04-5697d708a61f", 00:10:53.174 "is_configured": true, 00:10:53.174 "data_offset": 0, 00:10:53.174 "data_size": 65536 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "name": "BaseBdev3", 00:10:53.174 "uuid": "3934a9a4-3357-478c-a19d-a9ffdf6e22f6", 00:10:53.174 "is_configured": true, 00:10:53.174 "data_offset": 0, 00:10:53.174 "data_size": 65536 00:10:53.174 }, 00:10:53.174 { 00:10:53.174 "name": "BaseBdev4", 00:10:53.174 "uuid": "b803c809-7a1f-4306-b0b5-4f22ac02c633", 00:10:53.174 "is_configured": true, 00:10:53.174 "data_offset": 0, 00:10:53.174 "data_size": 65536 00:10:53.174 } 00:10:53.174 ] 00:10:53.174 } 00:10:53.174 } 00:10:53.174 }' 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:53.174 BaseBdev2 00:10:53.174 BaseBdev3 
00:10:53.174 BaseBdev4' 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.174 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.175 18:08:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.175 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.433 18:08:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.433 [2024-12-06 18:08:05.387490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.433 [2024-12-06 18:08:05.387540] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.433 [2024-12-06 18:08:05.387604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.433 "name": "Existed_Raid", 00:10:53.433 "uuid": "6d2650d1-1b86-41f5-8c03-a1e057cc259a", 00:10:53.433 "strip_size_kb": 64, 00:10:53.433 "state": "offline", 00:10:53.433 "raid_level": "raid0", 00:10:53.433 "superblock": false, 00:10:53.433 "num_base_bdevs": 4, 00:10:53.433 "num_base_bdevs_discovered": 3, 00:10:53.433 "num_base_bdevs_operational": 3, 00:10:53.433 "base_bdevs_list": [ 00:10:53.433 { 00:10:53.433 "name": null, 00:10:53.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.433 "is_configured": false, 00:10:53.433 "data_offset": 0, 00:10:53.433 "data_size": 65536 00:10:53.433 }, 00:10:53.433 { 00:10:53.433 "name": "BaseBdev2", 00:10:53.433 "uuid": "7c1644fe-42f1-46e3-8d04-5697d708a61f", 00:10:53.433 "is_configured": 
true, 00:10:53.433 "data_offset": 0, 00:10:53.433 "data_size": 65536 00:10:53.433 }, 00:10:53.433 { 00:10:53.433 "name": "BaseBdev3", 00:10:53.433 "uuid": "3934a9a4-3357-478c-a19d-a9ffdf6e22f6", 00:10:53.433 "is_configured": true, 00:10:53.433 "data_offset": 0, 00:10:53.433 "data_size": 65536 00:10:53.433 }, 00:10:53.433 { 00:10:53.433 "name": "BaseBdev4", 00:10:53.433 "uuid": "b803c809-7a1f-4306-b0b5-4f22ac02c633", 00:10:53.433 "is_configured": true, 00:10:53.433 "data_offset": 0, 00:10:53.433 "data_size": 65536 00:10:53.433 } 00:10:53.433 ] 00:10:53.433 }' 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.433 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.000 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:54.000 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.000 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.000 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.000 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.000 18:08:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:54.000 18:08:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.000 [2024-12-06 18:08:06.015462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.000 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.259 [2024-12-06 18:08:06.187119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:54.259 18:08:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.259 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.259 [2024-12-06 18:08:06.360270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:54.259 [2024-12-06 18:08:06.360341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.519 BaseBdev2 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.519 [ 00:10:54.519 { 00:10:54.519 "name": "BaseBdev2", 00:10:54.519 "aliases": [ 00:10:54.519 "ddc02281-f885-4955-a318-e5805b54fa75" 00:10:54.519 ], 00:10:54.519 "product_name": "Malloc disk", 00:10:54.519 "block_size": 512, 00:10:54.519 "num_blocks": 65536, 00:10:54.519 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:54.519 "assigned_rate_limits": { 00:10:54.519 "rw_ios_per_sec": 0, 00:10:54.519 "rw_mbytes_per_sec": 0, 00:10:54.519 "r_mbytes_per_sec": 0, 00:10:54.519 "w_mbytes_per_sec": 0 00:10:54.519 }, 00:10:54.519 "claimed": false, 00:10:54.519 "zoned": false, 00:10:54.519 "supported_io_types": { 00:10:54.519 "read": true, 00:10:54.519 "write": true, 00:10:54.519 "unmap": true, 00:10:54.519 "flush": true, 00:10:54.519 "reset": true, 00:10:54.519 "nvme_admin": false, 00:10:54.519 "nvme_io": false, 00:10:54.519 "nvme_io_md": false, 00:10:54.519 "write_zeroes": true, 00:10:54.519 "zcopy": true, 00:10:54.519 "get_zone_info": false, 00:10:54.519 "zone_management": false, 00:10:54.519 "zone_append": false, 00:10:54.519 "compare": false, 00:10:54.519 "compare_and_write": false, 00:10:54.519 "abort": true, 00:10:54.519 "seek_hole": false, 00:10:54.519 "seek_data": false, 
00:10:54.519 "copy": true, 00:10:54.519 "nvme_iov_md": false 00:10:54.519 }, 00:10:54.519 "memory_domains": [ 00:10:54.519 { 00:10:54.519 "dma_device_id": "system", 00:10:54.519 "dma_device_type": 1 00:10:54.519 }, 00:10:54.519 { 00:10:54.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.519 "dma_device_type": 2 00:10:54.519 } 00:10:54.519 ], 00:10:54.519 "driver_specific": {} 00:10:54.519 } 00:10:54.519 ] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.519 BaseBdev3 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.519 
18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.519 [ 00:10:54.519 { 00:10:54.519 "name": "BaseBdev3", 00:10:54.519 "aliases": [ 00:10:54.519 "bb092a6b-4960-4c35-85ae-4b5a2009cbc7" 00:10:54.519 ], 00:10:54.519 "product_name": "Malloc disk", 00:10:54.519 "block_size": 512, 00:10:54.519 "num_blocks": 65536, 00:10:54.519 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 00:10:54.519 "assigned_rate_limits": { 00:10:54.519 "rw_ios_per_sec": 0, 00:10:54.519 "rw_mbytes_per_sec": 0, 00:10:54.519 "r_mbytes_per_sec": 0, 00:10:54.519 "w_mbytes_per_sec": 0 00:10:54.519 }, 00:10:54.519 "claimed": false, 00:10:54.519 "zoned": false, 00:10:54.519 "supported_io_types": { 00:10:54.519 "read": true, 00:10:54.519 "write": true, 00:10:54.519 "unmap": true, 00:10:54.519 "flush": true, 00:10:54.519 "reset": true, 00:10:54.519 "nvme_admin": false, 00:10:54.519 "nvme_io": false, 00:10:54.519 "nvme_io_md": false, 00:10:54.519 "write_zeroes": true, 00:10:54.519 "zcopy": true, 00:10:54.519 "get_zone_info": false, 00:10:54.519 "zone_management": false, 00:10:54.519 "zone_append": false, 00:10:54.519 "compare": false, 00:10:54.519 "compare_and_write": false, 00:10:54.519 "abort": true, 00:10:54.519 "seek_hole": false, 00:10:54.519 "seek_data": false, 00:10:54.519 
"copy": true, 00:10:54.519 "nvme_iov_md": false 00:10:54.519 }, 00:10:54.519 "memory_domains": [ 00:10:54.519 { 00:10:54.519 "dma_device_id": "system", 00:10:54.519 "dma_device_type": 1 00:10:54.519 }, 00:10:54.519 { 00:10:54.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.519 "dma_device_type": 2 00:10:54.519 } 00:10:54.519 ], 00:10:54.519 "driver_specific": {} 00:10:54.519 } 00:10:54.519 ] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.519 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.778 BaseBdev4 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.778 18:08:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.778 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.778 [ 00:10:54.778 { 00:10:54.778 "name": "BaseBdev4", 00:10:54.778 "aliases": [ 00:10:54.778 "53f618d5-1752-44fa-8331-267b7863b1cf" 00:10:54.778 ], 00:10:54.778 "product_name": "Malloc disk", 00:10:54.778 "block_size": 512, 00:10:54.778 "num_blocks": 65536, 00:10:54.778 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:54.778 "assigned_rate_limits": { 00:10:54.778 "rw_ios_per_sec": 0, 00:10:54.778 "rw_mbytes_per_sec": 0, 00:10:54.778 "r_mbytes_per_sec": 0, 00:10:54.778 "w_mbytes_per_sec": 0 00:10:54.778 }, 00:10:54.778 "claimed": false, 00:10:54.778 "zoned": false, 00:10:54.778 "supported_io_types": { 00:10:54.778 "read": true, 00:10:54.778 "write": true, 00:10:54.778 "unmap": true, 00:10:54.778 "flush": true, 00:10:54.778 "reset": true, 00:10:54.778 "nvme_admin": false, 00:10:54.778 "nvme_io": false, 00:10:54.778 "nvme_io_md": false, 00:10:54.778 "write_zeroes": true, 00:10:54.779 "zcopy": true, 00:10:54.779 "get_zone_info": false, 00:10:54.779 "zone_management": false, 00:10:54.779 "zone_append": false, 00:10:54.779 "compare": false, 00:10:54.779 "compare_and_write": false, 00:10:54.779 "abort": true, 00:10:54.779 "seek_hole": false, 00:10:54.779 "seek_data": false, 00:10:54.779 "copy": true, 
00:10:54.779 "nvme_iov_md": false 00:10:54.779 }, 00:10:54.779 "memory_domains": [ 00:10:54.779 { 00:10:54.779 "dma_device_id": "system", 00:10:54.779 "dma_device_type": 1 00:10:54.779 }, 00:10:54.779 { 00:10:54.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.779 "dma_device_type": 2 00:10:54.779 } 00:10:54.779 ], 00:10:54.779 "driver_specific": {} 00:10:54.779 } 00:10:54.779 ] 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.779 [2024-12-06 18:08:06.738401] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.779 [2024-12-06 18:08:06.738558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.779 [2024-12-06 18:08:06.738618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.779 [2024-12-06 18:08:06.740957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.779 [2024-12-06 18:08:06.741124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.779 18:08:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.779 "name": "Existed_Raid", 00:10:54.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.779 "strip_size_kb": 64, 00:10:54.779 "state": "configuring", 00:10:54.779 
"raid_level": "raid0", 00:10:54.779 "superblock": false, 00:10:54.779 "num_base_bdevs": 4, 00:10:54.779 "num_base_bdevs_discovered": 3, 00:10:54.779 "num_base_bdevs_operational": 4, 00:10:54.779 "base_bdevs_list": [ 00:10:54.779 { 00:10:54.779 "name": "BaseBdev1", 00:10:54.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.779 "is_configured": false, 00:10:54.779 "data_offset": 0, 00:10:54.779 "data_size": 0 00:10:54.779 }, 00:10:54.779 { 00:10:54.779 "name": "BaseBdev2", 00:10:54.779 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:54.779 "is_configured": true, 00:10:54.779 "data_offset": 0, 00:10:54.779 "data_size": 65536 00:10:54.779 }, 00:10:54.779 { 00:10:54.779 "name": "BaseBdev3", 00:10:54.779 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 00:10:54.779 "is_configured": true, 00:10:54.779 "data_offset": 0, 00:10:54.779 "data_size": 65536 00:10:54.779 }, 00:10:54.779 { 00:10:54.779 "name": "BaseBdev4", 00:10:54.779 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:54.779 "is_configured": true, 00:10:54.779 "data_offset": 0, 00:10:54.779 "data_size": 65536 00:10:54.779 } 00:10:54.779 ] 00:10:54.779 }' 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.779 18:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.345 [2024-12-06 18:08:07.233774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.345 "name": "Existed_Raid", 00:10:55.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.345 "strip_size_kb": 64, 00:10:55.345 "state": "configuring", 00:10:55.345 "raid_level": "raid0", 00:10:55.345 "superblock": false, 00:10:55.345 
"num_base_bdevs": 4, 00:10:55.345 "num_base_bdevs_discovered": 2, 00:10:55.345 "num_base_bdevs_operational": 4, 00:10:55.345 "base_bdevs_list": [ 00:10:55.345 { 00:10:55.345 "name": "BaseBdev1", 00:10:55.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.345 "is_configured": false, 00:10:55.345 "data_offset": 0, 00:10:55.345 "data_size": 0 00:10:55.345 }, 00:10:55.345 { 00:10:55.345 "name": null, 00:10:55.345 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:55.345 "is_configured": false, 00:10:55.345 "data_offset": 0, 00:10:55.345 "data_size": 65536 00:10:55.345 }, 00:10:55.345 { 00:10:55.345 "name": "BaseBdev3", 00:10:55.345 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 00:10:55.345 "is_configured": true, 00:10:55.345 "data_offset": 0, 00:10:55.345 "data_size": 65536 00:10:55.345 }, 00:10:55.345 { 00:10:55.345 "name": "BaseBdev4", 00:10:55.345 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:55.345 "is_configured": true, 00:10:55.345 "data_offset": 0, 00:10:55.345 "data_size": 65536 00:10:55.345 } 00:10:55.345 ] 00:10:55.345 }' 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.345 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.602 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.602 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:55.602 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.602 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.602 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.602 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:55.602 18:08:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.602 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.602 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.862 [2024-12-06 18:08:07.777471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.862 BaseBdev1 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.862 [ 00:10:55.862 { 00:10:55.862 "name": "BaseBdev1", 00:10:55.862 "aliases": [ 00:10:55.862 "ff1beb52-dc3b-4a38-9847-7692e00c995e" 00:10:55.862 ], 00:10:55.862 "product_name": "Malloc disk", 00:10:55.862 "block_size": 512, 00:10:55.862 "num_blocks": 65536, 00:10:55.862 "uuid": "ff1beb52-dc3b-4a38-9847-7692e00c995e", 00:10:55.862 "assigned_rate_limits": { 00:10:55.862 "rw_ios_per_sec": 0, 00:10:55.862 "rw_mbytes_per_sec": 0, 00:10:55.862 "r_mbytes_per_sec": 0, 00:10:55.862 "w_mbytes_per_sec": 0 00:10:55.862 }, 00:10:55.862 "claimed": true, 00:10:55.862 "claim_type": "exclusive_write", 00:10:55.862 "zoned": false, 00:10:55.862 "supported_io_types": { 00:10:55.862 "read": true, 00:10:55.862 "write": true, 00:10:55.862 "unmap": true, 00:10:55.862 "flush": true, 00:10:55.862 "reset": true, 00:10:55.862 "nvme_admin": false, 00:10:55.862 "nvme_io": false, 00:10:55.862 "nvme_io_md": false, 00:10:55.862 "write_zeroes": true, 00:10:55.862 "zcopy": true, 00:10:55.862 "get_zone_info": false, 00:10:55.862 "zone_management": false, 00:10:55.862 "zone_append": false, 00:10:55.862 "compare": false, 00:10:55.862 "compare_and_write": false, 00:10:55.862 "abort": true, 00:10:55.862 "seek_hole": false, 00:10:55.862 "seek_data": false, 00:10:55.862 "copy": true, 00:10:55.862 "nvme_iov_md": false 00:10:55.862 }, 00:10:55.862 "memory_domains": [ 00:10:55.862 { 00:10:55.862 "dma_device_id": "system", 00:10:55.862 "dma_device_type": 1 00:10:55.862 }, 00:10:55.862 { 00:10:55.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.862 "dma_device_type": 2 00:10:55.862 } 00:10:55.862 ], 00:10:55.862 "driver_specific": {} 00:10:55.862 } 00:10:55.862 ] 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.862 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.862 "name": "Existed_Raid", 00:10:55.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.862 "strip_size_kb": 64, 00:10:55.862 "state": "configuring", 00:10:55.862 "raid_level": "raid0", 00:10:55.863 "superblock": false, 
00:10:55.863 "num_base_bdevs": 4, 00:10:55.863 "num_base_bdevs_discovered": 3, 00:10:55.863 "num_base_bdevs_operational": 4, 00:10:55.863 "base_bdevs_list": [ 00:10:55.863 { 00:10:55.863 "name": "BaseBdev1", 00:10:55.863 "uuid": "ff1beb52-dc3b-4a38-9847-7692e00c995e", 00:10:55.863 "is_configured": true, 00:10:55.863 "data_offset": 0, 00:10:55.863 "data_size": 65536 00:10:55.863 }, 00:10:55.863 { 00:10:55.863 "name": null, 00:10:55.863 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:55.863 "is_configured": false, 00:10:55.863 "data_offset": 0, 00:10:55.863 "data_size": 65536 00:10:55.863 }, 00:10:55.863 { 00:10:55.863 "name": "BaseBdev3", 00:10:55.863 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 00:10:55.863 "is_configured": true, 00:10:55.863 "data_offset": 0, 00:10:55.863 "data_size": 65536 00:10:55.863 }, 00:10:55.863 { 00:10:55.863 "name": "BaseBdev4", 00:10:55.863 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:55.863 "is_configured": true, 00:10:55.863 "data_offset": 0, 00:10:55.863 "data_size": 65536 00:10:55.863 } 00:10:55.863 ] 00:10:55.863 }' 00:10:55.863 18:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.863 18:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:56.120 18:08:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.120 [2024-12-06 18:08:08.276848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.120 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.121 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.121 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.121 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.121 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.121 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.121 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.121 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.121 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.121 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.378 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.378 18:08:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.378 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.378 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.378 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.378 "name": "Existed_Raid", 00:10:56.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.378 "strip_size_kb": 64, 00:10:56.378 "state": "configuring", 00:10:56.378 "raid_level": "raid0", 00:10:56.378 "superblock": false, 00:10:56.378 "num_base_bdevs": 4, 00:10:56.378 "num_base_bdevs_discovered": 2, 00:10:56.378 "num_base_bdevs_operational": 4, 00:10:56.378 "base_bdevs_list": [ 00:10:56.378 { 00:10:56.378 "name": "BaseBdev1", 00:10:56.378 "uuid": "ff1beb52-dc3b-4a38-9847-7692e00c995e", 00:10:56.378 "is_configured": true, 00:10:56.378 "data_offset": 0, 00:10:56.378 "data_size": 65536 00:10:56.378 }, 00:10:56.378 { 00:10:56.378 "name": null, 00:10:56.378 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:56.378 "is_configured": false, 00:10:56.378 "data_offset": 0, 00:10:56.378 "data_size": 65536 00:10:56.378 }, 00:10:56.378 { 00:10:56.378 "name": null, 00:10:56.378 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 00:10:56.378 "is_configured": false, 00:10:56.378 "data_offset": 0, 00:10:56.378 "data_size": 65536 00:10:56.378 }, 00:10:56.378 { 00:10:56.378 "name": "BaseBdev4", 00:10:56.378 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:56.378 "is_configured": true, 00:10:56.378 "data_offset": 0, 00:10:56.378 "data_size": 65536 00:10:56.378 } 00:10:56.378 ] 00:10:56.378 }' 00:10:56.378 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.378 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.635 [2024-12-06 18:08:08.760040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.635 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.636 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.636 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.636 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.636 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.938 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.938 "name": "Existed_Raid", 00:10:56.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.938 "strip_size_kb": 64, 00:10:56.938 "state": "configuring", 00:10:56.938 "raid_level": "raid0", 00:10:56.938 "superblock": false, 00:10:56.938 "num_base_bdevs": 4, 00:10:56.938 "num_base_bdevs_discovered": 3, 00:10:56.938 "num_base_bdevs_operational": 4, 00:10:56.938 "base_bdevs_list": [ 00:10:56.938 { 00:10:56.938 "name": "BaseBdev1", 00:10:56.938 "uuid": "ff1beb52-dc3b-4a38-9847-7692e00c995e", 00:10:56.938 "is_configured": true, 00:10:56.938 "data_offset": 0, 00:10:56.938 "data_size": 65536 00:10:56.938 }, 00:10:56.938 { 00:10:56.938 "name": null, 00:10:56.938 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:56.938 "is_configured": false, 00:10:56.938 "data_offset": 0, 00:10:56.938 "data_size": 65536 00:10:56.938 }, 00:10:56.938 { 00:10:56.938 "name": "BaseBdev3", 00:10:56.938 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 
00:10:56.938 "is_configured": true, 00:10:56.938 "data_offset": 0, 00:10:56.938 "data_size": 65536 00:10:56.938 }, 00:10:56.938 { 00:10:56.938 "name": "BaseBdev4", 00:10:56.938 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:56.938 "is_configured": true, 00:10:56.938 "data_offset": 0, 00:10:56.938 "data_size": 65536 00:10:56.938 } 00:10:56.938 ] 00:10:56.938 }' 00:10:56.938 18:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.938 18:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.195 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.195 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:57.195 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.195 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:57.195 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.195 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.195 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.195 [2024-12-06 18:08:09.247463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.453 18:08:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.453 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.453 "name": "Existed_Raid", 00:10:57.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.453 "strip_size_kb": 64, 00:10:57.453 "state": "configuring", 00:10:57.453 "raid_level": "raid0", 00:10:57.453 "superblock": false, 00:10:57.453 "num_base_bdevs": 4, 00:10:57.453 "num_base_bdevs_discovered": 2, 00:10:57.453 
"num_base_bdevs_operational": 4, 00:10:57.453 "base_bdevs_list": [ 00:10:57.453 { 00:10:57.453 "name": null, 00:10:57.453 "uuid": "ff1beb52-dc3b-4a38-9847-7692e00c995e", 00:10:57.453 "is_configured": false, 00:10:57.453 "data_offset": 0, 00:10:57.453 "data_size": 65536 00:10:57.453 }, 00:10:57.453 { 00:10:57.453 "name": null, 00:10:57.453 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:57.453 "is_configured": false, 00:10:57.453 "data_offset": 0, 00:10:57.453 "data_size": 65536 00:10:57.453 }, 00:10:57.453 { 00:10:57.453 "name": "BaseBdev3", 00:10:57.453 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 00:10:57.453 "is_configured": true, 00:10:57.453 "data_offset": 0, 00:10:57.453 "data_size": 65536 00:10:57.453 }, 00:10:57.453 { 00:10:57.453 "name": "BaseBdev4", 00:10:57.453 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:57.453 "is_configured": true, 00:10:57.453 "data_offset": 0, 00:10:57.454 "data_size": 65536 00:10:57.454 } 00:10:57.454 ] 00:10:57.454 }' 00:10:57.454 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.454 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.712 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.712 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.712 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.712 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:57.712 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.712 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:57.712 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:57.712 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.712 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.712 [2024-12-06 18:08:09.855446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.713 18:08:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.713 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.971 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.971 "name": "Existed_Raid", 00:10:57.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.971 "strip_size_kb": 64, 00:10:57.971 "state": "configuring", 00:10:57.971 "raid_level": "raid0", 00:10:57.971 "superblock": false, 00:10:57.971 "num_base_bdevs": 4, 00:10:57.971 "num_base_bdevs_discovered": 3, 00:10:57.971 "num_base_bdevs_operational": 4, 00:10:57.971 "base_bdevs_list": [ 00:10:57.971 { 00:10:57.971 "name": null, 00:10:57.972 "uuid": "ff1beb52-dc3b-4a38-9847-7692e00c995e", 00:10:57.972 "is_configured": false, 00:10:57.972 "data_offset": 0, 00:10:57.972 "data_size": 65536 00:10:57.972 }, 00:10:57.972 { 00:10:57.972 "name": "BaseBdev2", 00:10:57.972 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:57.972 "is_configured": true, 00:10:57.972 "data_offset": 0, 00:10:57.972 "data_size": 65536 00:10:57.972 }, 00:10:57.972 { 00:10:57.972 "name": "BaseBdev3", 00:10:57.972 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 00:10:57.972 "is_configured": true, 00:10:57.972 "data_offset": 0, 00:10:57.972 "data_size": 65536 00:10:57.972 }, 00:10:57.972 { 00:10:57.972 "name": "BaseBdev4", 00:10:57.972 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:57.972 "is_configured": true, 00:10:57.972 "data_offset": 0, 00:10:57.972 "data_size": 65536 00:10:57.972 } 00:10:57.972 ] 00:10:57.972 }' 00:10:57.972 18:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.972 18:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.228 18:08:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ff1beb52-dc3b-4a38-9847-7692e00c995e 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.228 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.486 [2024-12-06 18:08:10.426723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:58.486 [2024-12-06 18:08:10.426896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:58.486 [2024-12-06 18:08:10.426927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:58.486 [2024-12-06 18:08:10.427319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:58.486 [2024-12-06 18:08:10.427547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:58.486 [2024-12-06 18:08:10.427604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:58.486 [2024-12-06 18:08:10.427980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.486 NewBaseBdev 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:58.486 [ 00:10:58.486 { 00:10:58.486 "name": "NewBaseBdev", 00:10:58.486 "aliases": [ 00:10:58.486 "ff1beb52-dc3b-4a38-9847-7692e00c995e" 00:10:58.486 ], 00:10:58.486 "product_name": "Malloc disk", 00:10:58.486 "block_size": 512, 00:10:58.486 "num_blocks": 65536, 00:10:58.486 "uuid": "ff1beb52-dc3b-4a38-9847-7692e00c995e", 00:10:58.486 "assigned_rate_limits": { 00:10:58.486 "rw_ios_per_sec": 0, 00:10:58.486 "rw_mbytes_per_sec": 0, 00:10:58.486 "r_mbytes_per_sec": 0, 00:10:58.486 "w_mbytes_per_sec": 0 00:10:58.486 }, 00:10:58.486 "claimed": true, 00:10:58.486 "claim_type": "exclusive_write", 00:10:58.486 "zoned": false, 00:10:58.486 "supported_io_types": { 00:10:58.486 "read": true, 00:10:58.486 "write": true, 00:10:58.486 "unmap": true, 00:10:58.486 "flush": true, 00:10:58.486 "reset": true, 00:10:58.486 "nvme_admin": false, 00:10:58.486 "nvme_io": false, 00:10:58.486 "nvme_io_md": false, 00:10:58.486 "write_zeroes": true, 00:10:58.486 "zcopy": true, 00:10:58.486 "get_zone_info": false, 00:10:58.486 "zone_management": false, 00:10:58.486 "zone_append": false, 00:10:58.486 "compare": false, 00:10:58.486 "compare_and_write": false, 00:10:58.486 "abort": true, 00:10:58.486 "seek_hole": false, 00:10:58.486 "seek_data": false, 00:10:58.486 "copy": true, 00:10:58.486 "nvme_iov_md": false 00:10:58.486 }, 00:10:58.486 "memory_domains": [ 00:10:58.486 { 00:10:58.486 "dma_device_id": "system", 00:10:58.486 "dma_device_type": 1 00:10:58.486 }, 00:10:58.486 { 00:10:58.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.486 "dma_device_type": 2 00:10:58.486 } 00:10:58.486 ], 00:10:58.486 "driver_specific": {} 00:10:58.486 } 00:10:58.486 ] 00:10:58.486 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.487 "name": "Existed_Raid", 00:10:58.487 "uuid": "ef40a19c-331e-425f-87ee-715fccf967c5", 00:10:58.487 "strip_size_kb": 64, 00:10:58.487 "state": "online", 00:10:58.487 "raid_level": "raid0", 00:10:58.487 "superblock": false, 00:10:58.487 "num_base_bdevs": 4, 00:10:58.487 
"num_base_bdevs_discovered": 4, 00:10:58.487 "num_base_bdevs_operational": 4, 00:10:58.487 "base_bdevs_list": [ 00:10:58.487 { 00:10:58.487 "name": "NewBaseBdev", 00:10:58.487 "uuid": "ff1beb52-dc3b-4a38-9847-7692e00c995e", 00:10:58.487 "is_configured": true, 00:10:58.487 "data_offset": 0, 00:10:58.487 "data_size": 65536 00:10:58.487 }, 00:10:58.487 { 00:10:58.487 "name": "BaseBdev2", 00:10:58.487 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:58.487 "is_configured": true, 00:10:58.487 "data_offset": 0, 00:10:58.487 "data_size": 65536 00:10:58.487 }, 00:10:58.487 { 00:10:58.487 "name": "BaseBdev3", 00:10:58.487 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 00:10:58.487 "is_configured": true, 00:10:58.487 "data_offset": 0, 00:10:58.487 "data_size": 65536 00:10:58.487 }, 00:10:58.487 { 00:10:58.487 "name": "BaseBdev4", 00:10:58.487 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:58.487 "is_configured": true, 00:10:58.487 "data_offset": 0, 00:10:58.487 "data_size": 65536 00:10:58.487 } 00:10:58.487 ] 00:10:58.487 }' 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.487 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.745 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.003 [2024-12-06 18:08:10.914510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.003 18:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.003 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.003 "name": "Existed_Raid", 00:10:59.003 "aliases": [ 00:10:59.003 "ef40a19c-331e-425f-87ee-715fccf967c5" 00:10:59.003 ], 00:10:59.003 "product_name": "Raid Volume", 00:10:59.003 "block_size": 512, 00:10:59.003 "num_blocks": 262144, 00:10:59.003 "uuid": "ef40a19c-331e-425f-87ee-715fccf967c5", 00:10:59.003 "assigned_rate_limits": { 00:10:59.003 "rw_ios_per_sec": 0, 00:10:59.003 "rw_mbytes_per_sec": 0, 00:10:59.003 "r_mbytes_per_sec": 0, 00:10:59.003 "w_mbytes_per_sec": 0 00:10:59.003 }, 00:10:59.003 "claimed": false, 00:10:59.003 "zoned": false, 00:10:59.003 "supported_io_types": { 00:10:59.003 "read": true, 00:10:59.003 "write": true, 00:10:59.003 "unmap": true, 00:10:59.003 "flush": true, 00:10:59.003 "reset": true, 00:10:59.003 "nvme_admin": false, 00:10:59.003 "nvme_io": false, 00:10:59.003 "nvme_io_md": false, 00:10:59.003 "write_zeroes": true, 00:10:59.003 "zcopy": false, 00:10:59.003 "get_zone_info": false, 00:10:59.003 "zone_management": false, 00:10:59.003 "zone_append": false, 00:10:59.003 "compare": false, 00:10:59.003 "compare_and_write": false, 00:10:59.003 "abort": false, 00:10:59.003 "seek_hole": false, 00:10:59.003 "seek_data": false, 00:10:59.003 "copy": false, 00:10:59.003 "nvme_iov_md": false 00:10:59.003 }, 00:10:59.003 "memory_domains": [ 
00:10:59.003 { 00:10:59.003 "dma_device_id": "system", 00:10:59.003 "dma_device_type": 1 00:10:59.003 }, 00:10:59.004 { 00:10:59.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.004 "dma_device_type": 2 00:10:59.004 }, 00:10:59.004 { 00:10:59.004 "dma_device_id": "system", 00:10:59.004 "dma_device_type": 1 00:10:59.004 }, 00:10:59.004 { 00:10:59.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.004 "dma_device_type": 2 00:10:59.004 }, 00:10:59.004 { 00:10:59.004 "dma_device_id": "system", 00:10:59.004 "dma_device_type": 1 00:10:59.004 }, 00:10:59.004 { 00:10:59.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.004 "dma_device_type": 2 00:10:59.004 }, 00:10:59.004 { 00:10:59.004 "dma_device_id": "system", 00:10:59.004 "dma_device_type": 1 00:10:59.004 }, 00:10:59.004 { 00:10:59.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.004 "dma_device_type": 2 00:10:59.004 } 00:10:59.004 ], 00:10:59.004 "driver_specific": { 00:10:59.004 "raid": { 00:10:59.004 "uuid": "ef40a19c-331e-425f-87ee-715fccf967c5", 00:10:59.004 "strip_size_kb": 64, 00:10:59.004 "state": "online", 00:10:59.004 "raid_level": "raid0", 00:10:59.004 "superblock": false, 00:10:59.004 "num_base_bdevs": 4, 00:10:59.004 "num_base_bdevs_discovered": 4, 00:10:59.004 "num_base_bdevs_operational": 4, 00:10:59.004 "base_bdevs_list": [ 00:10:59.004 { 00:10:59.004 "name": "NewBaseBdev", 00:10:59.004 "uuid": "ff1beb52-dc3b-4a38-9847-7692e00c995e", 00:10:59.004 "is_configured": true, 00:10:59.004 "data_offset": 0, 00:10:59.004 "data_size": 65536 00:10:59.004 }, 00:10:59.004 { 00:10:59.004 "name": "BaseBdev2", 00:10:59.004 "uuid": "ddc02281-f885-4955-a318-e5805b54fa75", 00:10:59.004 "is_configured": true, 00:10:59.004 "data_offset": 0, 00:10:59.004 "data_size": 65536 00:10:59.004 }, 00:10:59.004 { 00:10:59.004 "name": "BaseBdev3", 00:10:59.004 "uuid": "bb092a6b-4960-4c35-85ae-4b5a2009cbc7", 00:10:59.004 "is_configured": true, 00:10:59.004 "data_offset": 0, 00:10:59.004 "data_size": 65536 
00:10:59.004 }, 00:10:59.004 { 00:10:59.004 "name": "BaseBdev4", 00:10:59.004 "uuid": "53f618d5-1752-44fa-8331-267b7863b1cf", 00:10:59.004 "is_configured": true, 00:10:59.004 "data_offset": 0, 00:10:59.004 "data_size": 65536 00:10:59.004 } 00:10:59.004 ] 00:10:59.004 } 00:10:59.004 } 00:10:59.004 }' 00:10:59.004 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.004 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:59.004 BaseBdev2 00:10:59.004 BaseBdev3 00:10:59.004 BaseBdev4' 00:10:59.004 18:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.004 
18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.004 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.262 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.262 [2024-12-06 18:08:11.249491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.262 [2024-12-06 18:08:11.249654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.262 [2024-12-06 18:08:11.249855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.262 [2024-12-06 18:08:11.250024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.263 [2024-12-06 18:08:11.250130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69836 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69836 ']' 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69836 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69836 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69836' 00:10:59.263 killing process with pid 69836 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69836 00:10:59.263 [2024-12-06 18:08:11.292771] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.263 18:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69836 00:10:59.827 [2024-12-06 18:08:11.776682] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.266 18:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:01.267 00:11:01.267 real 0m12.183s 00:11:01.267 user 0m19.267s 00:11:01.267 sys 0m1.855s 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.267 ************************************ 00:11:01.267 END TEST raid_state_function_test 00:11:01.267 ************************************ 00:11:01.267 18:08:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:01.267 18:08:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:01.267 18:08:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.267 18:08:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.267 ************************************ 00:11:01.267 START TEST raid_state_function_test_sb 00:11:01.267 ************************************ 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:01.267 
18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:01.267 Process raid pid: 70515 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70515 00:11:01.267 18:08:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70515' 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70515 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70515 ']' 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.267 18:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.267 [2024-12-06 18:08:13.265981] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:11:01.267 [2024-12-06 18:08:13.266258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.526 [2024-12-06 18:08:13.447358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.526 [2024-12-06 18:08:13.579573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.785 [2024-12-06 18:08:13.826320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.785 [2024-12-06 18:08:13.826375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.046 [2024-12-06 18:08:14.177417] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.046 [2024-12-06 18:08:14.177601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.046 [2024-12-06 18:08:14.177643] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.046 [2024-12-06 18:08:14.177672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.046 [2024-12-06 18:08:14.177696] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:02.046 [2024-12-06 18:08:14.177721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.046 [2024-12-06 18:08:14.177767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.046 [2024-12-06 18:08:14.177793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.046 18:08:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.046 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.305 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.305 "name": "Existed_Raid", 00:11:02.305 "uuid": "330aa466-6f36-4bd0-9e9d-ba9eb2dc33e8", 00:11:02.305 "strip_size_kb": 64, 00:11:02.305 "state": "configuring", 00:11:02.305 "raid_level": "raid0", 00:11:02.305 "superblock": true, 00:11:02.305 "num_base_bdevs": 4, 00:11:02.305 "num_base_bdevs_discovered": 0, 00:11:02.305 "num_base_bdevs_operational": 4, 00:11:02.305 "base_bdevs_list": [ 00:11:02.305 { 00:11:02.305 "name": "BaseBdev1", 00:11:02.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.305 "is_configured": false, 00:11:02.305 "data_offset": 0, 00:11:02.305 "data_size": 0 00:11:02.305 }, 00:11:02.305 { 00:11:02.305 "name": "BaseBdev2", 00:11:02.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.305 "is_configured": false, 00:11:02.305 "data_offset": 0, 00:11:02.305 "data_size": 0 00:11:02.305 }, 00:11:02.305 { 00:11:02.305 "name": "BaseBdev3", 00:11:02.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.305 "is_configured": false, 00:11:02.305 "data_offset": 0, 00:11:02.305 "data_size": 0 00:11:02.305 }, 00:11:02.305 { 00:11:02.305 "name": "BaseBdev4", 00:11:02.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.305 "is_configured": false, 00:11:02.305 "data_offset": 0, 00:11:02.305 "data_size": 0 00:11:02.305 } 00:11:02.305 ] 00:11:02.306 }' 00:11:02.306 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.306 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.564 18:08:14 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.564 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.564 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.564 [2024-12-06 18:08:14.624690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.565 [2024-12-06 18:08:14.624830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.565 [2024-12-06 18:08:14.632709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.565 [2024-12-06 18:08:14.632832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.565 [2024-12-06 18:08:14.632871] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.565 [2024-12-06 18:08:14.632913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.565 [2024-12-06 18:08:14.632946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.565 [2024-12-06 18:08:14.632986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.565 [2024-12-06 18:08:14.633017] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:02.565 [2024-12-06 18:08:14.633056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.565 [2024-12-06 18:08:14.685492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.565 BaseBdev1 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.565 [ 00:11:02.565 { 00:11:02.565 "name": "BaseBdev1", 00:11:02.565 "aliases": [ 00:11:02.565 "647dfbf4-ffc3-4a88-91f2-49b0eaa2eb2a" 00:11:02.565 ], 00:11:02.565 "product_name": "Malloc disk", 00:11:02.565 "block_size": 512, 00:11:02.565 "num_blocks": 65536, 00:11:02.565 "uuid": "647dfbf4-ffc3-4a88-91f2-49b0eaa2eb2a", 00:11:02.565 "assigned_rate_limits": { 00:11:02.565 "rw_ios_per_sec": 0, 00:11:02.565 "rw_mbytes_per_sec": 0, 00:11:02.565 "r_mbytes_per_sec": 0, 00:11:02.565 "w_mbytes_per_sec": 0 00:11:02.565 }, 00:11:02.565 "claimed": true, 00:11:02.565 "claim_type": "exclusive_write", 00:11:02.565 "zoned": false, 00:11:02.565 "supported_io_types": { 00:11:02.565 "read": true, 00:11:02.565 "write": true, 00:11:02.565 "unmap": true, 00:11:02.565 "flush": true, 00:11:02.565 "reset": true, 00:11:02.565 "nvme_admin": false, 00:11:02.565 "nvme_io": false, 00:11:02.565 "nvme_io_md": false, 00:11:02.565 "write_zeroes": true, 00:11:02.565 "zcopy": true, 00:11:02.565 "get_zone_info": false, 00:11:02.565 "zone_management": false, 00:11:02.565 "zone_append": false, 00:11:02.565 "compare": false, 00:11:02.565 "compare_and_write": false, 00:11:02.565 "abort": true, 00:11:02.565 "seek_hole": false, 00:11:02.565 "seek_data": false, 00:11:02.565 "copy": true, 00:11:02.565 "nvme_iov_md": false 00:11:02.565 }, 00:11:02.565 "memory_domains": [ 00:11:02.565 { 00:11:02.565 "dma_device_id": "system", 00:11:02.565 "dma_device_type": 1 00:11:02.565 }, 00:11:02.565 { 00:11:02.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.565 "dma_device_type": 2 00:11:02.565 } 00:11:02.565 ], 00:11:02.565 "driver_specific": {} 
00:11:02.565 } 00:11:02.565 ] 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.565 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.824 18:08:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.824 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.824 "name": "Existed_Raid", 00:11:02.824 "uuid": "02bacf56-3b07-4a3b-ba6d-02b6570683ab", 00:11:02.824 "strip_size_kb": 64, 00:11:02.824 "state": "configuring", 00:11:02.824 "raid_level": "raid0", 00:11:02.824 "superblock": true, 00:11:02.824 "num_base_bdevs": 4, 00:11:02.824 "num_base_bdevs_discovered": 1, 00:11:02.824 "num_base_bdevs_operational": 4, 00:11:02.824 "base_bdevs_list": [ 00:11:02.824 { 00:11:02.824 "name": "BaseBdev1", 00:11:02.824 "uuid": "647dfbf4-ffc3-4a88-91f2-49b0eaa2eb2a", 00:11:02.824 "is_configured": true, 00:11:02.824 "data_offset": 2048, 00:11:02.824 "data_size": 63488 00:11:02.824 }, 00:11:02.824 { 00:11:02.824 "name": "BaseBdev2", 00:11:02.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.824 "is_configured": false, 00:11:02.824 "data_offset": 0, 00:11:02.824 "data_size": 0 00:11:02.824 }, 00:11:02.824 { 00:11:02.824 "name": "BaseBdev3", 00:11:02.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.824 "is_configured": false, 00:11:02.824 "data_offset": 0, 00:11:02.824 "data_size": 0 00:11:02.824 }, 00:11:02.824 { 00:11:02.824 "name": "BaseBdev4", 00:11:02.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.824 "is_configured": false, 00:11:02.824 "data_offset": 0, 00:11:02.824 "data_size": 0 00:11:02.824 } 00:11:02.824 ] 00:11:02.824 }' 00:11:02.824 18:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.824 18:08:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.083 [2024-12-06 18:08:15.136870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.083 [2024-12-06 18:08:15.137037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.083 [2024-12-06 18:08:15.144970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.083 [2024-12-06 18:08:15.147385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.083 [2024-12-06 18:08:15.147536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.083 [2024-12-06 18:08:15.147575] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.083 [2024-12-06 18:08:15.147614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.083 [2024-12-06 18:08:15.147651] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.083 [2024-12-06 18:08:15.147692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:03.083 18:08:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.083 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.083 "name": 
"Existed_Raid", 00:11:03.083 "uuid": "6760a57e-3797-4eac-a14a-77419eb5a316", 00:11:03.083 "strip_size_kb": 64, 00:11:03.083 "state": "configuring", 00:11:03.083 "raid_level": "raid0", 00:11:03.083 "superblock": true, 00:11:03.083 "num_base_bdevs": 4, 00:11:03.083 "num_base_bdevs_discovered": 1, 00:11:03.083 "num_base_bdevs_operational": 4, 00:11:03.083 "base_bdevs_list": [ 00:11:03.083 { 00:11:03.083 "name": "BaseBdev1", 00:11:03.083 "uuid": "647dfbf4-ffc3-4a88-91f2-49b0eaa2eb2a", 00:11:03.083 "is_configured": true, 00:11:03.083 "data_offset": 2048, 00:11:03.083 "data_size": 63488 00:11:03.083 }, 00:11:03.083 { 00:11:03.083 "name": "BaseBdev2", 00:11:03.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.083 "is_configured": false, 00:11:03.083 "data_offset": 0, 00:11:03.083 "data_size": 0 00:11:03.083 }, 00:11:03.084 { 00:11:03.084 "name": "BaseBdev3", 00:11:03.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.084 "is_configured": false, 00:11:03.084 "data_offset": 0, 00:11:03.084 "data_size": 0 00:11:03.084 }, 00:11:03.084 { 00:11:03.084 "name": "BaseBdev4", 00:11:03.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.084 "is_configured": false, 00:11:03.084 "data_offset": 0, 00:11:03.084 "data_size": 0 00:11:03.084 } 00:11:03.084 ] 00:11:03.084 }' 00:11:03.084 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.084 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.650 [2024-12-06 18:08:15.607946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:03.650 BaseBdev2 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.650 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.650 [ 00:11:03.650 { 00:11:03.650 "name": "BaseBdev2", 00:11:03.650 "aliases": [ 00:11:03.650 "7eeceb2e-e16f-409f-81eb-2529e431c98e" 00:11:03.650 ], 00:11:03.650 "product_name": "Malloc disk", 00:11:03.650 "block_size": 512, 00:11:03.650 "num_blocks": 65536, 00:11:03.650 "uuid": "7eeceb2e-e16f-409f-81eb-2529e431c98e", 00:11:03.650 
"assigned_rate_limits": { 00:11:03.650 "rw_ios_per_sec": 0, 00:11:03.650 "rw_mbytes_per_sec": 0, 00:11:03.650 "r_mbytes_per_sec": 0, 00:11:03.650 "w_mbytes_per_sec": 0 00:11:03.650 }, 00:11:03.650 "claimed": true, 00:11:03.650 "claim_type": "exclusive_write", 00:11:03.650 "zoned": false, 00:11:03.650 "supported_io_types": { 00:11:03.650 "read": true, 00:11:03.650 "write": true, 00:11:03.650 "unmap": true, 00:11:03.650 "flush": true, 00:11:03.650 "reset": true, 00:11:03.650 "nvme_admin": false, 00:11:03.650 "nvme_io": false, 00:11:03.650 "nvme_io_md": false, 00:11:03.650 "write_zeroes": true, 00:11:03.650 "zcopy": true, 00:11:03.650 "get_zone_info": false, 00:11:03.650 "zone_management": false, 00:11:03.650 "zone_append": false, 00:11:03.650 "compare": false, 00:11:03.650 "compare_and_write": false, 00:11:03.650 "abort": true, 00:11:03.650 "seek_hole": false, 00:11:03.650 "seek_data": false, 00:11:03.650 "copy": true, 00:11:03.650 "nvme_iov_md": false 00:11:03.650 }, 00:11:03.650 "memory_domains": [ 00:11:03.651 { 00:11:03.651 "dma_device_id": "system", 00:11:03.651 "dma_device_type": 1 00:11:03.651 }, 00:11:03.651 { 00:11:03.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.651 "dma_device_type": 2 00:11:03.651 } 00:11:03.651 ], 00:11:03.651 "driver_specific": {} 00:11:03.651 } 00:11:03.651 ] 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.651 "name": "Existed_Raid", 00:11:03.651 "uuid": "6760a57e-3797-4eac-a14a-77419eb5a316", 00:11:03.651 "strip_size_kb": 64, 00:11:03.651 "state": "configuring", 00:11:03.651 "raid_level": "raid0", 00:11:03.651 "superblock": true, 00:11:03.651 "num_base_bdevs": 4, 00:11:03.651 "num_base_bdevs_discovered": 2, 00:11:03.651 "num_base_bdevs_operational": 4, 
00:11:03.651 "base_bdevs_list": [ 00:11:03.651 { 00:11:03.651 "name": "BaseBdev1", 00:11:03.651 "uuid": "647dfbf4-ffc3-4a88-91f2-49b0eaa2eb2a", 00:11:03.651 "is_configured": true, 00:11:03.651 "data_offset": 2048, 00:11:03.651 "data_size": 63488 00:11:03.651 }, 00:11:03.651 { 00:11:03.651 "name": "BaseBdev2", 00:11:03.651 "uuid": "7eeceb2e-e16f-409f-81eb-2529e431c98e", 00:11:03.651 "is_configured": true, 00:11:03.651 "data_offset": 2048, 00:11:03.651 "data_size": 63488 00:11:03.651 }, 00:11:03.651 { 00:11:03.651 "name": "BaseBdev3", 00:11:03.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.651 "is_configured": false, 00:11:03.651 "data_offset": 0, 00:11:03.651 "data_size": 0 00:11:03.651 }, 00:11:03.651 { 00:11:03.651 "name": "BaseBdev4", 00:11:03.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.651 "is_configured": false, 00:11:03.651 "data_offset": 0, 00:11:03.651 "data_size": 0 00:11:03.651 } 00:11:03.651 ] 00:11:03.651 }' 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.651 18:08:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.219 [2024-12-06 18:08:16.136552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.219 BaseBdev3 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.219 [ 00:11:04.219 { 00:11:04.219 "name": "BaseBdev3", 00:11:04.219 "aliases": [ 00:11:04.219 "5fab5e67-c541-416e-9938-c3507ebde561" 00:11:04.219 ], 00:11:04.219 "product_name": "Malloc disk", 00:11:04.219 "block_size": 512, 00:11:04.219 "num_blocks": 65536, 00:11:04.219 "uuid": "5fab5e67-c541-416e-9938-c3507ebde561", 00:11:04.219 "assigned_rate_limits": { 00:11:04.219 "rw_ios_per_sec": 0, 00:11:04.219 "rw_mbytes_per_sec": 0, 00:11:04.219 "r_mbytes_per_sec": 0, 00:11:04.219 "w_mbytes_per_sec": 0 00:11:04.219 }, 00:11:04.219 "claimed": true, 00:11:04.219 "claim_type": "exclusive_write", 00:11:04.219 "zoned": false, 00:11:04.219 "supported_io_types": { 00:11:04.219 "read": true, 00:11:04.219 
"write": true, 00:11:04.219 "unmap": true, 00:11:04.219 "flush": true, 00:11:04.219 "reset": true, 00:11:04.219 "nvme_admin": false, 00:11:04.219 "nvme_io": false, 00:11:04.219 "nvme_io_md": false, 00:11:04.219 "write_zeroes": true, 00:11:04.219 "zcopy": true, 00:11:04.219 "get_zone_info": false, 00:11:04.219 "zone_management": false, 00:11:04.219 "zone_append": false, 00:11:04.219 "compare": false, 00:11:04.219 "compare_and_write": false, 00:11:04.219 "abort": true, 00:11:04.219 "seek_hole": false, 00:11:04.219 "seek_data": false, 00:11:04.219 "copy": true, 00:11:04.219 "nvme_iov_md": false 00:11:04.219 }, 00:11:04.219 "memory_domains": [ 00:11:04.219 { 00:11:04.219 "dma_device_id": "system", 00:11:04.219 "dma_device_type": 1 00:11:04.219 }, 00:11:04.219 { 00:11:04.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.219 "dma_device_type": 2 00:11:04.219 } 00:11:04.219 ], 00:11:04.219 "driver_specific": {} 00:11:04.219 } 00:11:04.219 ] 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.219 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.219 "name": "Existed_Raid", 00:11:04.219 "uuid": "6760a57e-3797-4eac-a14a-77419eb5a316", 00:11:04.219 "strip_size_kb": 64, 00:11:04.219 "state": "configuring", 00:11:04.219 "raid_level": "raid0", 00:11:04.219 "superblock": true, 00:11:04.219 "num_base_bdevs": 4, 00:11:04.219 "num_base_bdevs_discovered": 3, 00:11:04.219 "num_base_bdevs_operational": 4, 00:11:04.219 "base_bdevs_list": [ 00:11:04.219 { 00:11:04.220 "name": "BaseBdev1", 00:11:04.220 "uuid": "647dfbf4-ffc3-4a88-91f2-49b0eaa2eb2a", 00:11:04.220 "is_configured": true, 00:11:04.220 "data_offset": 2048, 00:11:04.220 "data_size": 63488 00:11:04.220 }, 00:11:04.220 { 00:11:04.220 "name": "BaseBdev2", 00:11:04.220 "uuid": 
"7eeceb2e-e16f-409f-81eb-2529e431c98e", 00:11:04.220 "is_configured": true, 00:11:04.220 "data_offset": 2048, 00:11:04.220 "data_size": 63488 00:11:04.220 }, 00:11:04.220 { 00:11:04.220 "name": "BaseBdev3", 00:11:04.220 "uuid": "5fab5e67-c541-416e-9938-c3507ebde561", 00:11:04.220 "is_configured": true, 00:11:04.220 "data_offset": 2048, 00:11:04.220 "data_size": 63488 00:11:04.220 }, 00:11:04.220 { 00:11:04.220 "name": "BaseBdev4", 00:11:04.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.220 "is_configured": false, 00:11:04.220 "data_offset": 0, 00:11:04.220 "data_size": 0 00:11:04.220 } 00:11:04.220 ] 00:11:04.220 }' 00:11:04.220 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.220 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.788 [2024-12-06 18:08:16.717411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.788 [2024-12-06 18:08:16.717862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.788 [2024-12-06 18:08:16.717929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:04.788 [2024-12-06 18:08:16.718299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:04.788 BaseBdev4 00:11:04.788 [2024-12-06 18:08:16.718522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.788 [2024-12-06 18:08:16.718576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:04.788 [2024-12-06 18:08:16.718813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.788 [ 00:11:04.788 { 00:11:04.788 "name": "BaseBdev4", 00:11:04.788 "aliases": [ 00:11:04.788 "09fd6de4-34ee-4e26-9e43-4ce4d8e0412b" 00:11:04.788 ], 00:11:04.788 "product_name": "Malloc disk", 00:11:04.788 "block_size": 512, 00:11:04.788 
"num_blocks": 65536, 00:11:04.788 "uuid": "09fd6de4-34ee-4e26-9e43-4ce4d8e0412b", 00:11:04.788 "assigned_rate_limits": { 00:11:04.788 "rw_ios_per_sec": 0, 00:11:04.788 "rw_mbytes_per_sec": 0, 00:11:04.788 "r_mbytes_per_sec": 0, 00:11:04.788 "w_mbytes_per_sec": 0 00:11:04.788 }, 00:11:04.788 "claimed": true, 00:11:04.788 "claim_type": "exclusive_write", 00:11:04.788 "zoned": false, 00:11:04.788 "supported_io_types": { 00:11:04.788 "read": true, 00:11:04.788 "write": true, 00:11:04.788 "unmap": true, 00:11:04.788 "flush": true, 00:11:04.788 "reset": true, 00:11:04.788 "nvme_admin": false, 00:11:04.788 "nvme_io": false, 00:11:04.788 "nvme_io_md": false, 00:11:04.788 "write_zeroes": true, 00:11:04.788 "zcopy": true, 00:11:04.788 "get_zone_info": false, 00:11:04.788 "zone_management": false, 00:11:04.788 "zone_append": false, 00:11:04.788 "compare": false, 00:11:04.788 "compare_and_write": false, 00:11:04.788 "abort": true, 00:11:04.788 "seek_hole": false, 00:11:04.788 "seek_data": false, 00:11:04.788 "copy": true, 00:11:04.788 "nvme_iov_md": false 00:11:04.788 }, 00:11:04.788 "memory_domains": [ 00:11:04.788 { 00:11:04.788 "dma_device_id": "system", 00:11:04.788 "dma_device_type": 1 00:11:04.788 }, 00:11:04.788 { 00:11:04.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.788 "dma_device_type": 2 00:11:04.788 } 00:11:04.788 ], 00:11:04.788 "driver_specific": {} 00:11:04.788 } 00:11:04.788 ] 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.788 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.789 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.789 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.789 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.789 "name": "Existed_Raid", 00:11:04.789 "uuid": "6760a57e-3797-4eac-a14a-77419eb5a316", 00:11:04.789 "strip_size_kb": 64, 00:11:04.789 "state": "online", 00:11:04.789 "raid_level": "raid0", 00:11:04.789 "superblock": true, 00:11:04.789 "num_base_bdevs": 4, 
00:11:04.789 "num_base_bdevs_discovered": 4, 00:11:04.789 "num_base_bdevs_operational": 4, 00:11:04.789 "base_bdevs_list": [ 00:11:04.789 { 00:11:04.789 "name": "BaseBdev1", 00:11:04.789 "uuid": "647dfbf4-ffc3-4a88-91f2-49b0eaa2eb2a", 00:11:04.789 "is_configured": true, 00:11:04.789 "data_offset": 2048, 00:11:04.789 "data_size": 63488 00:11:04.789 }, 00:11:04.789 { 00:11:04.789 "name": "BaseBdev2", 00:11:04.789 "uuid": "7eeceb2e-e16f-409f-81eb-2529e431c98e", 00:11:04.789 "is_configured": true, 00:11:04.789 "data_offset": 2048, 00:11:04.789 "data_size": 63488 00:11:04.789 }, 00:11:04.789 { 00:11:04.789 "name": "BaseBdev3", 00:11:04.789 "uuid": "5fab5e67-c541-416e-9938-c3507ebde561", 00:11:04.789 "is_configured": true, 00:11:04.789 "data_offset": 2048, 00:11:04.789 "data_size": 63488 00:11:04.789 }, 00:11:04.789 { 00:11:04.789 "name": "BaseBdev4", 00:11:04.789 "uuid": "09fd6de4-34ee-4e26-9e43-4ce4d8e0412b", 00:11:04.789 "is_configured": true, 00:11:04.789 "data_offset": 2048, 00:11:04.789 "data_size": 63488 00:11:04.789 } 00:11:04.789 ] 00:11:04.789 }' 00:11:04.789 18:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.789 18:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.367 
18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.367 [2024-12-06 18:08:17.225129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.367 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.367 "name": "Existed_Raid", 00:11:05.367 "aliases": [ 00:11:05.367 "6760a57e-3797-4eac-a14a-77419eb5a316" 00:11:05.367 ], 00:11:05.367 "product_name": "Raid Volume", 00:11:05.367 "block_size": 512, 00:11:05.367 "num_blocks": 253952, 00:11:05.367 "uuid": "6760a57e-3797-4eac-a14a-77419eb5a316", 00:11:05.367 "assigned_rate_limits": { 00:11:05.367 "rw_ios_per_sec": 0, 00:11:05.367 "rw_mbytes_per_sec": 0, 00:11:05.367 "r_mbytes_per_sec": 0, 00:11:05.367 "w_mbytes_per_sec": 0 00:11:05.367 }, 00:11:05.367 "claimed": false, 00:11:05.367 "zoned": false, 00:11:05.367 "supported_io_types": { 00:11:05.367 "read": true, 00:11:05.367 "write": true, 00:11:05.367 "unmap": true, 00:11:05.367 "flush": true, 00:11:05.367 "reset": true, 00:11:05.367 "nvme_admin": false, 00:11:05.367 "nvme_io": false, 00:11:05.367 "nvme_io_md": false, 00:11:05.367 "write_zeroes": true, 00:11:05.367 "zcopy": false, 00:11:05.368 "get_zone_info": false, 00:11:05.368 "zone_management": false, 00:11:05.368 "zone_append": false, 00:11:05.368 "compare": false, 00:11:05.368 "compare_and_write": false, 00:11:05.368 "abort": false, 00:11:05.368 "seek_hole": false, 00:11:05.368 "seek_data": false, 00:11:05.368 "copy": false, 00:11:05.368 
"nvme_iov_md": false 00:11:05.368 }, 00:11:05.368 "memory_domains": [ 00:11:05.368 { 00:11:05.368 "dma_device_id": "system", 00:11:05.368 "dma_device_type": 1 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.368 "dma_device_type": 2 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "dma_device_id": "system", 00:11:05.368 "dma_device_type": 1 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.368 "dma_device_type": 2 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "dma_device_id": "system", 00:11:05.368 "dma_device_type": 1 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.368 "dma_device_type": 2 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "dma_device_id": "system", 00:11:05.368 "dma_device_type": 1 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.368 "dma_device_type": 2 00:11:05.368 } 00:11:05.368 ], 00:11:05.368 "driver_specific": { 00:11:05.368 "raid": { 00:11:05.368 "uuid": "6760a57e-3797-4eac-a14a-77419eb5a316", 00:11:05.368 "strip_size_kb": 64, 00:11:05.368 "state": "online", 00:11:05.368 "raid_level": "raid0", 00:11:05.368 "superblock": true, 00:11:05.368 "num_base_bdevs": 4, 00:11:05.368 "num_base_bdevs_discovered": 4, 00:11:05.368 "num_base_bdevs_operational": 4, 00:11:05.368 "base_bdevs_list": [ 00:11:05.368 { 00:11:05.368 "name": "BaseBdev1", 00:11:05.368 "uuid": "647dfbf4-ffc3-4a88-91f2-49b0eaa2eb2a", 00:11:05.368 "is_configured": true, 00:11:05.368 "data_offset": 2048, 00:11:05.368 "data_size": 63488 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "name": "BaseBdev2", 00:11:05.368 "uuid": "7eeceb2e-e16f-409f-81eb-2529e431c98e", 00:11:05.368 "is_configured": true, 00:11:05.368 "data_offset": 2048, 00:11:05.368 "data_size": 63488 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "name": "BaseBdev3", 00:11:05.368 "uuid": "5fab5e67-c541-416e-9938-c3507ebde561", 00:11:05.368 "is_configured": true, 
00:11:05.368 "data_offset": 2048, 00:11:05.368 "data_size": 63488 00:11:05.368 }, 00:11:05.368 { 00:11:05.368 "name": "BaseBdev4", 00:11:05.368 "uuid": "09fd6de4-34ee-4e26-9e43-4ce4d8e0412b", 00:11:05.368 "is_configured": true, 00:11:05.368 "data_offset": 2048, 00:11:05.368 "data_size": 63488 00:11:05.368 } 00:11:05.368 ] 00:11:05.368 } 00:11:05.368 } 00:11:05.368 }' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:05.368 BaseBdev2 00:11:05.368 BaseBdev3 00:11:05.368 BaseBdev4' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.368 18:08:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.368 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.631 [2024-12-06 18:08:17.540322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.631 [2024-12-06 18:08:17.540458] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.631 [2024-12-06 18:08:17.540554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:05.631 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.632 "name": "Existed_Raid", 00:11:05.632 "uuid": "6760a57e-3797-4eac-a14a-77419eb5a316", 00:11:05.632 "strip_size_kb": 64, 00:11:05.632 "state": "offline", 00:11:05.632 "raid_level": "raid0", 00:11:05.632 "superblock": true, 00:11:05.632 "num_base_bdevs": 4, 00:11:05.632 "num_base_bdevs_discovered": 3, 00:11:05.632 "num_base_bdevs_operational": 3, 00:11:05.632 "base_bdevs_list": [ 00:11:05.632 { 00:11:05.632 "name": null, 00:11:05.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.632 "is_configured": false, 00:11:05.632 "data_offset": 0, 00:11:05.632 "data_size": 63488 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": "BaseBdev2", 00:11:05.632 "uuid": "7eeceb2e-e16f-409f-81eb-2529e431c98e", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 2048, 00:11:05.632 "data_size": 63488 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": "BaseBdev3", 00:11:05.632 "uuid": "5fab5e67-c541-416e-9938-c3507ebde561", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 2048, 00:11:05.632 "data_size": 63488 00:11:05.632 }, 00:11:05.632 { 00:11:05.632 "name": "BaseBdev4", 00:11:05.632 "uuid": "09fd6de4-34ee-4e26-9e43-4ce4d8e0412b", 00:11:05.632 "is_configured": true, 00:11:05.632 "data_offset": 2048, 00:11:05.632 "data_size": 63488 00:11:05.632 } 00:11:05.632 ] 00:11:05.632 }' 00:11:05.632 18:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.632 18:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.197 
18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.197 [2024-12-06 18:08:18.153925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.197 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.197 [2024-12-06 18:08:18.329086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:06.456 18:08:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.456 [2024-12-06 18:08:18.500639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:06.456 [2024-12-06 18:08:18.500782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.456 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.715 BaseBdev2 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.715 [ 00:11:06.715 { 00:11:06.715 "name": "BaseBdev2", 00:11:06.715 "aliases": [ 00:11:06.715 
"386bd477-893c-4a57-8b46-3bf59b3c3ec2" 00:11:06.715 ], 00:11:06.715 "product_name": "Malloc disk", 00:11:06.715 "block_size": 512, 00:11:06.715 "num_blocks": 65536, 00:11:06.715 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:06.715 "assigned_rate_limits": { 00:11:06.715 "rw_ios_per_sec": 0, 00:11:06.715 "rw_mbytes_per_sec": 0, 00:11:06.715 "r_mbytes_per_sec": 0, 00:11:06.715 "w_mbytes_per_sec": 0 00:11:06.715 }, 00:11:06.715 "claimed": false, 00:11:06.715 "zoned": false, 00:11:06.715 "supported_io_types": { 00:11:06.715 "read": true, 00:11:06.715 "write": true, 00:11:06.715 "unmap": true, 00:11:06.715 "flush": true, 00:11:06.715 "reset": true, 00:11:06.715 "nvme_admin": false, 00:11:06.715 "nvme_io": false, 00:11:06.715 "nvme_io_md": false, 00:11:06.715 "write_zeroes": true, 00:11:06.715 "zcopy": true, 00:11:06.715 "get_zone_info": false, 00:11:06.715 "zone_management": false, 00:11:06.715 "zone_append": false, 00:11:06.715 "compare": false, 00:11:06.715 "compare_and_write": false, 00:11:06.715 "abort": true, 00:11:06.715 "seek_hole": false, 00:11:06.715 "seek_data": false, 00:11:06.715 "copy": true, 00:11:06.715 "nvme_iov_md": false 00:11:06.715 }, 00:11:06.715 "memory_domains": [ 00:11:06.715 { 00:11:06.715 "dma_device_id": "system", 00:11:06.715 "dma_device_type": 1 00:11:06.715 }, 00:11:06.715 { 00:11:06.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.715 "dma_device_type": 2 00:11:06.715 } 00:11:06.715 ], 00:11:06.715 "driver_specific": {} 00:11:06.715 } 00:11:06.715 ] 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.715 18:08:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.715 BaseBdev3 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.715 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.716 [ 00:11:06.716 { 
00:11:06.716 "name": "BaseBdev3", 00:11:06.716 "aliases": [ 00:11:06.716 "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3" 00:11:06.716 ], 00:11:06.716 "product_name": "Malloc disk", 00:11:06.716 "block_size": 512, 00:11:06.716 "num_blocks": 65536, 00:11:06.716 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:06.716 "assigned_rate_limits": { 00:11:06.716 "rw_ios_per_sec": 0, 00:11:06.716 "rw_mbytes_per_sec": 0, 00:11:06.716 "r_mbytes_per_sec": 0, 00:11:06.716 "w_mbytes_per_sec": 0 00:11:06.716 }, 00:11:06.716 "claimed": false, 00:11:06.716 "zoned": false, 00:11:06.716 "supported_io_types": { 00:11:06.716 "read": true, 00:11:06.716 "write": true, 00:11:06.716 "unmap": true, 00:11:06.716 "flush": true, 00:11:06.716 "reset": true, 00:11:06.716 "nvme_admin": false, 00:11:06.716 "nvme_io": false, 00:11:06.716 "nvme_io_md": false, 00:11:06.716 "write_zeroes": true, 00:11:06.716 "zcopy": true, 00:11:06.716 "get_zone_info": false, 00:11:06.716 "zone_management": false, 00:11:06.716 "zone_append": false, 00:11:06.716 "compare": false, 00:11:06.716 "compare_and_write": false, 00:11:06.716 "abort": true, 00:11:06.716 "seek_hole": false, 00:11:06.716 "seek_data": false, 00:11:06.716 "copy": true, 00:11:06.716 "nvme_iov_md": false 00:11:06.716 }, 00:11:06.716 "memory_domains": [ 00:11:06.716 { 00:11:06.716 "dma_device_id": "system", 00:11:06.716 "dma_device_type": 1 00:11:06.716 }, 00:11:06.716 { 00:11:06.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.716 "dma_device_type": 2 00:11:06.716 } 00:11:06.716 ], 00:11:06.716 "driver_specific": {} 00:11:06.716 } 00:11:06.716 ] 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.716 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.974 BaseBdev4 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:06.974 [ 00:11:06.974 { 00:11:06.974 "name": "BaseBdev4", 00:11:06.974 "aliases": [ 00:11:06.974 "77f5a2f9-fede-45fd-a027-02a22d22b309" 00:11:06.974 ], 00:11:06.974 "product_name": "Malloc disk", 00:11:06.974 "block_size": 512, 00:11:06.974 "num_blocks": 65536, 00:11:06.974 "uuid": "77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:06.974 "assigned_rate_limits": { 00:11:06.974 "rw_ios_per_sec": 0, 00:11:06.974 "rw_mbytes_per_sec": 0, 00:11:06.974 "r_mbytes_per_sec": 0, 00:11:06.974 "w_mbytes_per_sec": 0 00:11:06.974 }, 00:11:06.974 "claimed": false, 00:11:06.974 "zoned": false, 00:11:06.974 "supported_io_types": { 00:11:06.974 "read": true, 00:11:06.974 "write": true, 00:11:06.974 "unmap": true, 00:11:06.974 "flush": true, 00:11:06.974 "reset": true, 00:11:06.974 "nvme_admin": false, 00:11:06.974 "nvme_io": false, 00:11:06.974 "nvme_io_md": false, 00:11:06.974 "write_zeroes": true, 00:11:06.974 "zcopy": true, 00:11:06.974 "get_zone_info": false, 00:11:06.974 "zone_management": false, 00:11:06.974 "zone_append": false, 00:11:06.974 "compare": false, 00:11:06.974 "compare_and_write": false, 00:11:06.974 "abort": true, 00:11:06.974 "seek_hole": false, 00:11:06.974 "seek_data": false, 00:11:06.974 "copy": true, 00:11:06.974 "nvme_iov_md": false 00:11:06.974 }, 00:11:06.974 "memory_domains": [ 00:11:06.974 { 00:11:06.974 "dma_device_id": "system", 00:11:06.974 "dma_device_type": 1 00:11:06.974 }, 00:11:06.974 { 00:11:06.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.974 "dma_device_type": 2 00:11:06.974 } 00:11:06.974 ], 00:11:06.974 "driver_specific": {} 00:11:06.974 } 00:11:06.974 ] 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.974 18:08:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.974 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.975 [2024-12-06 18:08:18.928932] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.975 [2024-12-06 18:08:18.929093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.975 [2024-12-06 18:08:18.929176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.975 [2024-12-06 18:08:18.931472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.975 [2024-12-06 18:08:18.931617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.975 "name": "Existed_Raid", 00:11:06.975 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:06.975 "strip_size_kb": 64, 00:11:06.975 "state": "configuring", 00:11:06.975 "raid_level": "raid0", 00:11:06.975 "superblock": true, 00:11:06.975 "num_base_bdevs": 4, 00:11:06.975 "num_base_bdevs_discovered": 3, 00:11:06.975 "num_base_bdevs_operational": 4, 00:11:06.975 "base_bdevs_list": [ 00:11:06.975 { 00:11:06.975 "name": "BaseBdev1", 00:11:06.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.975 "is_configured": false, 00:11:06.975 "data_offset": 0, 00:11:06.975 "data_size": 0 00:11:06.975 }, 00:11:06.975 { 00:11:06.975 "name": "BaseBdev2", 00:11:06.975 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:06.975 "is_configured": true, 00:11:06.975 "data_offset": 2048, 00:11:06.975 "data_size": 63488 
00:11:06.975 }, 00:11:06.975 { 00:11:06.975 "name": "BaseBdev3", 00:11:06.975 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:06.975 "is_configured": true, 00:11:06.975 "data_offset": 2048, 00:11:06.975 "data_size": 63488 00:11:06.975 }, 00:11:06.975 { 00:11:06.975 "name": "BaseBdev4", 00:11:06.975 "uuid": "77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:06.975 "is_configured": true, 00:11:06.975 "data_offset": 2048, 00:11:06.975 "data_size": 63488 00:11:06.975 } 00:11:06.975 ] 00:11:06.975 }' 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.975 18:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.234 [2024-12-06 18:08:19.380179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.234 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.493 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.493 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.493 "name": "Existed_Raid", 00:11:07.493 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:07.493 "strip_size_kb": 64, 00:11:07.493 "state": "configuring", 00:11:07.493 "raid_level": "raid0", 00:11:07.493 "superblock": true, 00:11:07.493 "num_base_bdevs": 4, 00:11:07.493 "num_base_bdevs_discovered": 2, 00:11:07.493 "num_base_bdevs_operational": 4, 00:11:07.493 "base_bdevs_list": [ 00:11:07.493 { 00:11:07.493 "name": "BaseBdev1", 00:11:07.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.493 "is_configured": false, 00:11:07.493 "data_offset": 0, 00:11:07.493 "data_size": 0 00:11:07.493 }, 00:11:07.493 { 00:11:07.493 "name": null, 00:11:07.493 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:07.493 "is_configured": false, 00:11:07.493 "data_offset": 0, 00:11:07.493 "data_size": 63488 
00:11:07.493 }, 00:11:07.493 { 00:11:07.493 "name": "BaseBdev3", 00:11:07.493 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:07.493 "is_configured": true, 00:11:07.493 "data_offset": 2048, 00:11:07.493 "data_size": 63488 00:11:07.493 }, 00:11:07.493 { 00:11:07.493 "name": "BaseBdev4", 00:11:07.493 "uuid": "77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:07.493 "is_configured": true, 00:11:07.493 "data_offset": 2048, 00:11:07.493 "data_size": 63488 00:11:07.493 } 00:11:07.493 ] 00:11:07.493 }' 00:11:07.493 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.493 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.752 [2024-12-06 18:08:19.911787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.752 BaseBdev1 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.752 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.010 [ 00:11:08.010 { 00:11:08.010 "name": "BaseBdev1", 00:11:08.010 "aliases": [ 00:11:08.010 "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e" 00:11:08.010 ], 00:11:08.010 "product_name": "Malloc disk", 00:11:08.010 "block_size": 512, 00:11:08.010 "num_blocks": 65536, 00:11:08.010 "uuid": "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e", 00:11:08.010 "assigned_rate_limits": { 00:11:08.010 "rw_ios_per_sec": 0, 00:11:08.010 "rw_mbytes_per_sec": 0, 
00:11:08.010 "r_mbytes_per_sec": 0, 00:11:08.010 "w_mbytes_per_sec": 0 00:11:08.010 }, 00:11:08.010 "claimed": true, 00:11:08.010 "claim_type": "exclusive_write", 00:11:08.010 "zoned": false, 00:11:08.010 "supported_io_types": { 00:11:08.010 "read": true, 00:11:08.010 "write": true, 00:11:08.010 "unmap": true, 00:11:08.010 "flush": true, 00:11:08.010 "reset": true, 00:11:08.010 "nvme_admin": false, 00:11:08.010 "nvme_io": false, 00:11:08.010 "nvme_io_md": false, 00:11:08.010 "write_zeroes": true, 00:11:08.010 "zcopy": true, 00:11:08.010 "get_zone_info": false, 00:11:08.010 "zone_management": false, 00:11:08.010 "zone_append": false, 00:11:08.010 "compare": false, 00:11:08.010 "compare_and_write": false, 00:11:08.010 "abort": true, 00:11:08.010 "seek_hole": false, 00:11:08.010 "seek_data": false, 00:11:08.010 "copy": true, 00:11:08.010 "nvme_iov_md": false 00:11:08.010 }, 00:11:08.010 "memory_domains": [ 00:11:08.010 { 00:11:08.010 "dma_device_id": "system", 00:11:08.010 "dma_device_type": 1 00:11:08.010 }, 00:11:08.010 { 00:11:08.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.010 "dma_device_type": 2 00:11:08.010 } 00:11:08.010 ], 00:11:08.010 "driver_specific": {} 00:11:08.010 } 00:11:08.010 ] 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.010 18:08:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.010 "name": "Existed_Raid", 00:11:08.010 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:08.010 "strip_size_kb": 64, 00:11:08.010 "state": "configuring", 00:11:08.010 "raid_level": "raid0", 00:11:08.010 "superblock": true, 00:11:08.010 "num_base_bdevs": 4, 00:11:08.010 "num_base_bdevs_discovered": 3, 00:11:08.010 "num_base_bdevs_operational": 4, 00:11:08.010 "base_bdevs_list": [ 00:11:08.010 { 00:11:08.010 "name": "BaseBdev1", 00:11:08.010 "uuid": "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e", 00:11:08.010 "is_configured": true, 00:11:08.010 "data_offset": 2048, 00:11:08.010 "data_size": 63488 00:11:08.010 }, 00:11:08.010 { 
00:11:08.010 "name": null, 00:11:08.010 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:08.010 "is_configured": false, 00:11:08.010 "data_offset": 0, 00:11:08.010 "data_size": 63488 00:11:08.010 }, 00:11:08.010 { 00:11:08.010 "name": "BaseBdev3", 00:11:08.010 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:08.010 "is_configured": true, 00:11:08.010 "data_offset": 2048, 00:11:08.010 "data_size": 63488 00:11:08.010 }, 00:11:08.010 { 00:11:08.010 "name": "BaseBdev4", 00:11:08.010 "uuid": "77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:08.010 "is_configured": true, 00:11:08.010 "data_offset": 2048, 00:11:08.010 "data_size": 63488 00:11:08.010 } 00:11:08.010 ] 00:11:08.010 }' 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.010 18:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.269 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.269 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.269 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.269 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.528 [2024-12-06 18:08:20.483034] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.528 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.528 18:08:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.528 "name": "Existed_Raid", 00:11:08.528 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:08.528 "strip_size_kb": 64, 00:11:08.528 "state": "configuring", 00:11:08.528 "raid_level": "raid0", 00:11:08.528 "superblock": true, 00:11:08.528 "num_base_bdevs": 4, 00:11:08.528 "num_base_bdevs_discovered": 2, 00:11:08.528 "num_base_bdevs_operational": 4, 00:11:08.528 "base_bdevs_list": [ 00:11:08.528 { 00:11:08.528 "name": "BaseBdev1", 00:11:08.528 "uuid": "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e", 00:11:08.528 "is_configured": true, 00:11:08.528 "data_offset": 2048, 00:11:08.528 "data_size": 63488 00:11:08.528 }, 00:11:08.528 { 00:11:08.528 "name": null, 00:11:08.528 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:08.528 "is_configured": false, 00:11:08.528 "data_offset": 0, 00:11:08.528 "data_size": 63488 00:11:08.528 }, 00:11:08.528 { 00:11:08.528 "name": null, 00:11:08.528 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:08.528 "is_configured": false, 00:11:08.528 "data_offset": 0, 00:11:08.528 "data_size": 63488 00:11:08.528 }, 00:11:08.528 { 00:11:08.528 "name": "BaseBdev4", 00:11:08.528 "uuid": "77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:08.528 "is_configured": true, 00:11:08.528 "data_offset": 2048, 00:11:08.528 "data_size": 63488 00:11:08.529 } 00:11:08.529 ] 00:11:08.529 }' 00:11:08.529 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.529 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.787 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.787 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.788 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.788 18:08:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.788 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.047 [2024-12-06 18:08:20.990162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.047 18:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.047 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.047 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.047 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.047 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.047 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.047 "name": "Existed_Raid", 00:11:09.047 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:09.047 "strip_size_kb": 64, 00:11:09.047 "state": "configuring", 00:11:09.047 "raid_level": "raid0", 00:11:09.047 "superblock": true, 00:11:09.047 "num_base_bdevs": 4, 00:11:09.047 "num_base_bdevs_discovered": 3, 00:11:09.047 "num_base_bdevs_operational": 4, 00:11:09.047 "base_bdevs_list": [ 00:11:09.047 { 00:11:09.047 "name": "BaseBdev1", 00:11:09.047 "uuid": "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e", 00:11:09.047 "is_configured": true, 00:11:09.047 "data_offset": 2048, 00:11:09.047 "data_size": 63488 00:11:09.047 }, 00:11:09.047 { 00:11:09.047 "name": null, 00:11:09.047 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:09.047 "is_configured": false, 00:11:09.047 "data_offset": 0, 00:11:09.047 "data_size": 63488 00:11:09.047 }, 00:11:09.047 { 00:11:09.047 "name": "BaseBdev3", 00:11:09.047 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:09.047 "is_configured": true, 00:11:09.047 "data_offset": 2048, 00:11:09.047 "data_size": 63488 00:11:09.047 }, 00:11:09.047 { 00:11:09.047 "name": "BaseBdev4", 00:11:09.047 "uuid": 
"77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:09.047 "is_configured": true, 00:11:09.047 "data_offset": 2048, 00:11:09.047 "data_size": 63488 00:11:09.047 } 00:11:09.047 ] 00:11:09.047 }' 00:11:09.047 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.047 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.615 [2024-12-06 18:08:21.537277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.615 "name": "Existed_Raid", 00:11:09.615 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:09.615 "strip_size_kb": 64, 00:11:09.615 "state": "configuring", 00:11:09.615 "raid_level": "raid0", 00:11:09.615 "superblock": true, 00:11:09.615 "num_base_bdevs": 4, 00:11:09.615 "num_base_bdevs_discovered": 2, 00:11:09.615 "num_base_bdevs_operational": 4, 00:11:09.615 "base_bdevs_list": [ 00:11:09.615 { 00:11:09.615 "name": null, 00:11:09.615 
"uuid": "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e", 00:11:09.615 "is_configured": false, 00:11:09.615 "data_offset": 0, 00:11:09.615 "data_size": 63488 00:11:09.615 }, 00:11:09.615 { 00:11:09.615 "name": null, 00:11:09.615 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:09.615 "is_configured": false, 00:11:09.615 "data_offset": 0, 00:11:09.615 "data_size": 63488 00:11:09.615 }, 00:11:09.615 { 00:11:09.615 "name": "BaseBdev3", 00:11:09.615 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:09.615 "is_configured": true, 00:11:09.615 "data_offset": 2048, 00:11:09.615 "data_size": 63488 00:11:09.615 }, 00:11:09.615 { 00:11:09.615 "name": "BaseBdev4", 00:11:09.615 "uuid": "77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:09.615 "is_configured": true, 00:11:09.615 "data_offset": 2048, 00:11:09.615 "data_size": 63488 00:11:09.615 } 00:11:09.615 ] 00:11:09.615 }' 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.615 18:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.208 [2024-12-06 18:08:22.118251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.208 18:08:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.208 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.208 "name": "Existed_Raid", 00:11:10.208 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:10.208 "strip_size_kb": 64, 00:11:10.208 "state": "configuring", 00:11:10.208 "raid_level": "raid0", 00:11:10.208 "superblock": true, 00:11:10.208 "num_base_bdevs": 4, 00:11:10.208 "num_base_bdevs_discovered": 3, 00:11:10.208 "num_base_bdevs_operational": 4, 00:11:10.208 "base_bdevs_list": [ 00:11:10.208 { 00:11:10.208 "name": null, 00:11:10.208 "uuid": "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e", 00:11:10.208 "is_configured": false, 00:11:10.208 "data_offset": 0, 00:11:10.208 "data_size": 63488 00:11:10.208 }, 00:11:10.208 { 00:11:10.208 "name": "BaseBdev2", 00:11:10.208 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:10.208 "is_configured": true, 00:11:10.208 "data_offset": 2048, 00:11:10.208 "data_size": 63488 00:11:10.208 }, 00:11:10.208 { 00:11:10.208 "name": "BaseBdev3", 00:11:10.209 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:10.209 "is_configured": true, 00:11:10.209 "data_offset": 2048, 00:11:10.209 "data_size": 63488 00:11:10.209 }, 00:11:10.209 { 00:11:10.209 "name": "BaseBdev4", 00:11:10.209 "uuid": "77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:10.209 "is_configured": true, 00:11:10.209 "data_offset": 2048, 00:11:10.209 "data_size": 63488 00:11:10.209 } 00:11:10.209 ] 00:11:10.209 }' 00:11:10.209 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.209 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.503 18:08:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:10.503 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 03eb89cd-d8cd-4ba8-962c-f78eb94ad56e 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.761 [2024-12-06 18:08:22.744456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:10.761 [2024-12-06 18:08:22.744787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:10.761 [2024-12-06 18:08:22.744803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:10.761 [2024-12-06 18:08:22.745137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:10.761 NewBaseBdev 00:11:10.761 [2024-12-06 18:08:22.745313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:10.761 [2024-12-06 18:08:22.745332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:10.761 [2024-12-06 18:08:22.745511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.761 18:08:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.761 [ 00:11:10.761 { 00:11:10.761 "name": "NewBaseBdev", 00:11:10.761 "aliases": [ 00:11:10.761 "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e" 00:11:10.761 ], 00:11:10.761 "product_name": "Malloc disk", 00:11:10.761 "block_size": 512, 00:11:10.761 "num_blocks": 65536, 00:11:10.761 "uuid": "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e", 00:11:10.761 "assigned_rate_limits": { 00:11:10.761 "rw_ios_per_sec": 0, 00:11:10.761 "rw_mbytes_per_sec": 0, 00:11:10.761 "r_mbytes_per_sec": 0, 00:11:10.761 "w_mbytes_per_sec": 0 00:11:10.761 }, 00:11:10.761 "claimed": true, 00:11:10.761 "claim_type": "exclusive_write", 00:11:10.761 "zoned": false, 00:11:10.761 "supported_io_types": { 00:11:10.761 "read": true, 00:11:10.761 "write": true, 00:11:10.761 "unmap": true, 00:11:10.761 "flush": true, 00:11:10.761 "reset": true, 00:11:10.761 "nvme_admin": false, 00:11:10.761 "nvme_io": false, 00:11:10.761 "nvme_io_md": false, 00:11:10.761 "write_zeroes": true, 00:11:10.761 "zcopy": true, 00:11:10.761 "get_zone_info": false, 00:11:10.761 "zone_management": false, 00:11:10.761 "zone_append": false, 00:11:10.761 "compare": false, 00:11:10.761 "compare_and_write": false, 00:11:10.761 "abort": true, 00:11:10.761 "seek_hole": false, 00:11:10.761 "seek_data": false, 00:11:10.761 "copy": true, 00:11:10.761 "nvme_iov_md": false 00:11:10.761 }, 00:11:10.761 "memory_domains": [ 00:11:10.761 { 00:11:10.761 "dma_device_id": "system", 00:11:10.761 "dma_device_type": 1 00:11:10.761 }, 00:11:10.761 { 00:11:10.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.761 "dma_device_type": 2 00:11:10.761 } 00:11:10.761 ], 00:11:10.761 "driver_specific": {} 00:11:10.761 } 00:11:10.761 ] 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:10.761 18:08:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.761 "name": "Existed_Raid", 00:11:10.761 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:10.761 "strip_size_kb": 64, 00:11:10.761 
"state": "online", 00:11:10.761 "raid_level": "raid0", 00:11:10.761 "superblock": true, 00:11:10.761 "num_base_bdevs": 4, 00:11:10.761 "num_base_bdevs_discovered": 4, 00:11:10.761 "num_base_bdevs_operational": 4, 00:11:10.761 "base_bdevs_list": [ 00:11:10.761 { 00:11:10.761 "name": "NewBaseBdev", 00:11:10.761 "uuid": "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e", 00:11:10.761 "is_configured": true, 00:11:10.761 "data_offset": 2048, 00:11:10.761 "data_size": 63488 00:11:10.761 }, 00:11:10.761 { 00:11:10.761 "name": "BaseBdev2", 00:11:10.761 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:10.761 "is_configured": true, 00:11:10.761 "data_offset": 2048, 00:11:10.761 "data_size": 63488 00:11:10.761 }, 00:11:10.761 { 00:11:10.761 "name": "BaseBdev3", 00:11:10.761 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:10.761 "is_configured": true, 00:11:10.761 "data_offset": 2048, 00:11:10.761 "data_size": 63488 00:11:10.761 }, 00:11:10.761 { 00:11:10.761 "name": "BaseBdev4", 00:11:10.761 "uuid": "77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:10.761 "is_configured": true, 00:11:10.761 "data_offset": 2048, 00:11:10.761 "data_size": 63488 00:11:10.761 } 00:11:10.761 ] 00:11:10.761 }' 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.761 18:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.327 
18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.327 [2024-12-06 18:08:23.236152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.327 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.327 "name": "Existed_Raid", 00:11:11.327 "aliases": [ 00:11:11.327 "8c9c4c85-e72f-4165-9f94-34c3eac0efa0" 00:11:11.327 ], 00:11:11.327 "product_name": "Raid Volume", 00:11:11.327 "block_size": 512, 00:11:11.327 "num_blocks": 253952, 00:11:11.327 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:11.327 "assigned_rate_limits": { 00:11:11.327 "rw_ios_per_sec": 0, 00:11:11.327 "rw_mbytes_per_sec": 0, 00:11:11.327 "r_mbytes_per_sec": 0, 00:11:11.327 "w_mbytes_per_sec": 0 00:11:11.327 }, 00:11:11.327 "claimed": false, 00:11:11.327 "zoned": false, 00:11:11.327 "supported_io_types": { 00:11:11.327 "read": true, 00:11:11.327 "write": true, 00:11:11.327 "unmap": true, 00:11:11.327 "flush": true, 00:11:11.327 "reset": true, 00:11:11.327 "nvme_admin": false, 00:11:11.327 "nvme_io": false, 00:11:11.328 "nvme_io_md": false, 00:11:11.328 "write_zeroes": true, 00:11:11.328 "zcopy": false, 00:11:11.328 "get_zone_info": false, 00:11:11.328 "zone_management": false, 00:11:11.328 "zone_append": false, 00:11:11.328 "compare": false, 00:11:11.328 "compare_and_write": false, 00:11:11.328 "abort": 
false, 00:11:11.328 "seek_hole": false, 00:11:11.328 "seek_data": false, 00:11:11.328 "copy": false, 00:11:11.328 "nvme_iov_md": false 00:11:11.328 }, 00:11:11.328 "memory_domains": [ 00:11:11.328 { 00:11:11.328 "dma_device_id": "system", 00:11:11.328 "dma_device_type": 1 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.328 "dma_device_type": 2 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 "dma_device_id": "system", 00:11:11.328 "dma_device_type": 1 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.328 "dma_device_type": 2 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 "dma_device_id": "system", 00:11:11.328 "dma_device_type": 1 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.328 "dma_device_type": 2 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 "dma_device_id": "system", 00:11:11.328 "dma_device_type": 1 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.328 "dma_device_type": 2 00:11:11.328 } 00:11:11.328 ], 00:11:11.328 "driver_specific": { 00:11:11.328 "raid": { 00:11:11.328 "uuid": "8c9c4c85-e72f-4165-9f94-34c3eac0efa0", 00:11:11.328 "strip_size_kb": 64, 00:11:11.328 "state": "online", 00:11:11.328 "raid_level": "raid0", 00:11:11.328 "superblock": true, 00:11:11.328 "num_base_bdevs": 4, 00:11:11.328 "num_base_bdevs_discovered": 4, 00:11:11.328 "num_base_bdevs_operational": 4, 00:11:11.328 "base_bdevs_list": [ 00:11:11.328 { 00:11:11.328 "name": "NewBaseBdev", 00:11:11.328 "uuid": "03eb89cd-d8cd-4ba8-962c-f78eb94ad56e", 00:11:11.328 "is_configured": true, 00:11:11.328 "data_offset": 2048, 00:11:11.328 "data_size": 63488 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 "name": "BaseBdev2", 00:11:11.328 "uuid": "386bd477-893c-4a57-8b46-3bf59b3c3ec2", 00:11:11.328 "is_configured": true, 00:11:11.328 "data_offset": 2048, 00:11:11.328 "data_size": 63488 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 
"name": "BaseBdev3", 00:11:11.328 "uuid": "b7de76fb-2a83-4521-a09a-0f9fd3f5f5d3", 00:11:11.328 "is_configured": true, 00:11:11.328 "data_offset": 2048, 00:11:11.328 "data_size": 63488 00:11:11.328 }, 00:11:11.328 { 00:11:11.328 "name": "BaseBdev4", 00:11:11.328 "uuid": "77f5a2f9-fede-45fd-a027-02a22d22b309", 00:11:11.328 "is_configured": true, 00:11:11.328 "data_offset": 2048, 00:11:11.328 "data_size": 63488 00:11:11.328 } 00:11:11.328 ] 00:11:11.328 } 00:11:11.328 } 00:11:11.328 }' 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:11.328 BaseBdev2 00:11:11.328 BaseBdev3 00:11:11.328 BaseBdev4' 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.328 18:08:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.328 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.586 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.586 [2024-12-06 18:08:23.571373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.586 [2024-12-06 18:08:23.571418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.586 [2024-12-06 18:08:23.571528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.586 [2024-12-06 18:08:23.571617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.586 [2024-12-06 18:08:23.571633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70515 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70515 ']' 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70515 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70515 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.587 killing process with pid 70515 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70515' 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70515 00:11:11.587 [2024-12-06 18:08:23.605584] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.587 18:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70515 00:11:12.151 [2024-12-06 18:08:24.088577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.532 18:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:13.532 00:11:13.532 real 0m12.274s 00:11:13.532 user 0m19.359s 00:11:13.532 sys 0m1.954s 00:11:13.532 18:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.532 18:08:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.532 ************************************ 00:11:13.532 END TEST raid_state_function_test_sb 00:11:13.532 ************************************ 00:11:13.532 18:08:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:13.532 18:08:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:13.532 18:08:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.532 18:08:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.532 ************************************ 00:11:13.532 START TEST raid_superblock_test 00:11:13.532 ************************************ 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:13.532 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71192 00:11:13.533 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71192 00:11:13.533 18:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71192 ']' 00:11:13.533 18:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.533 18:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.533 18:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:13.533 18:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.533 18:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.533 18:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.533 [2024-12-06 18:08:25.591481] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:11:13.533 [2024-12-06 18:08:25.591644] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71192 ] 00:11:13.790 [2024-12-06 18:08:25.774724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.790 [2024-12-06 18:08:25.910682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.048 [2024-12-06 18:08:26.163298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.048 [2024-12-06 18:08:26.163380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:14.615 
18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.615 malloc1 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.615 [2024-12-06 18:08:26.569817] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:14.615 [2024-12-06 18:08:26.569897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.615 [2024-12-06 18:08:26.569925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:14.615 [2024-12-06 18:08:26.569937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.615 [2024-12-06 18:08:26.572589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.615 [2024-12-06 18:08:26.572652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:14.615 pt1 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.615 malloc2 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.615 [2024-12-06 18:08:26.627088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:14.615 [2024-12-06 18:08:26.627162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.615 [2024-12-06 18:08:26.627204] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:14.615 [2024-12-06 18:08:26.627215] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.615 [2024-12-06 18:08:26.629765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.615 [2024-12-06 18:08:26.629814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:14.615 
pt2 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.615 malloc3 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.615 [2024-12-06 18:08:26.699019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:14.615 [2024-12-06 18:08:26.699099] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.615 [2024-12-06 18:08:26.699125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:14.615 [2024-12-06 18:08:26.699135] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.615 [2024-12-06 18:08:26.701603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.615 [2024-12-06 18:08:26.701645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:14.615 pt3 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:14.615 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.616 malloc4 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.616 [2024-12-06 18:08:26.752847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:14.616 [2024-12-06 18:08:26.752922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.616 [2024-12-06 18:08:26.752950] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:14.616 [2024-12-06 18:08:26.752972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.616 [2024-12-06 18:08:26.755616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.616 [2024-12-06 18:08:26.755669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:14.616 pt4 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.616 [2024-12-06 18:08:26.760895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:14.616 [2024-12-06 
18:08:26.763179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:14.616 [2024-12-06 18:08:26.763319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:14.616 [2024-12-06 18:08:26.763397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:14.616 [2024-12-06 18:08:26.763638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:14.616 [2024-12-06 18:08:26.763660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:14.616 [2024-12-06 18:08:26.764013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:14.616 [2024-12-06 18:08:26.764246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:14.616 [2024-12-06 18:08:26.764270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:14.616 [2024-12-06 18:08:26.764491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.616 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.875 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.875 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.875 "name": "raid_bdev1", 00:11:14.875 "uuid": "b7459ae4-4bac-49af-b2fd-5d44995b478b", 00:11:14.875 "strip_size_kb": 64, 00:11:14.875 "state": "online", 00:11:14.875 "raid_level": "raid0", 00:11:14.875 "superblock": true, 00:11:14.875 "num_base_bdevs": 4, 00:11:14.875 "num_base_bdevs_discovered": 4, 00:11:14.875 "num_base_bdevs_operational": 4, 00:11:14.875 "base_bdevs_list": [ 00:11:14.875 { 00:11:14.875 "name": "pt1", 00:11:14.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.875 "is_configured": true, 00:11:14.875 "data_offset": 2048, 00:11:14.875 "data_size": 63488 00:11:14.875 }, 00:11:14.875 { 00:11:14.875 "name": "pt2", 00:11:14.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.875 "is_configured": true, 00:11:14.875 "data_offset": 2048, 00:11:14.875 "data_size": 63488 00:11:14.875 }, 00:11:14.875 { 00:11:14.875 "name": "pt3", 00:11:14.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.875 "is_configured": true, 00:11:14.875 "data_offset": 2048, 00:11:14.875 
"data_size": 63488 00:11:14.875 }, 00:11:14.875 { 00:11:14.875 "name": "pt4", 00:11:14.875 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.875 "is_configured": true, 00:11:14.875 "data_offset": 2048, 00:11:14.875 "data_size": 63488 00:11:14.875 } 00:11:14.875 ] 00:11:14.875 }' 00:11:14.875 18:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.875 18:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.133 [2024-12-06 18:08:27.236605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.133 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.133 "name": "raid_bdev1", 00:11:15.133 "aliases": [ 00:11:15.133 "b7459ae4-4bac-49af-b2fd-5d44995b478b" 
00:11:15.133 ], 00:11:15.133 "product_name": "Raid Volume", 00:11:15.133 "block_size": 512, 00:11:15.133 "num_blocks": 253952, 00:11:15.133 "uuid": "b7459ae4-4bac-49af-b2fd-5d44995b478b", 00:11:15.133 "assigned_rate_limits": { 00:11:15.133 "rw_ios_per_sec": 0, 00:11:15.133 "rw_mbytes_per_sec": 0, 00:11:15.133 "r_mbytes_per_sec": 0, 00:11:15.133 "w_mbytes_per_sec": 0 00:11:15.133 }, 00:11:15.133 "claimed": false, 00:11:15.133 "zoned": false, 00:11:15.133 "supported_io_types": { 00:11:15.134 "read": true, 00:11:15.134 "write": true, 00:11:15.134 "unmap": true, 00:11:15.134 "flush": true, 00:11:15.134 "reset": true, 00:11:15.134 "nvme_admin": false, 00:11:15.134 "nvme_io": false, 00:11:15.134 "nvme_io_md": false, 00:11:15.134 "write_zeroes": true, 00:11:15.134 "zcopy": false, 00:11:15.134 "get_zone_info": false, 00:11:15.134 "zone_management": false, 00:11:15.134 "zone_append": false, 00:11:15.134 "compare": false, 00:11:15.134 "compare_and_write": false, 00:11:15.134 "abort": false, 00:11:15.134 "seek_hole": false, 00:11:15.134 "seek_data": false, 00:11:15.134 "copy": false, 00:11:15.134 "nvme_iov_md": false 00:11:15.134 }, 00:11:15.134 "memory_domains": [ 00:11:15.134 { 00:11:15.134 "dma_device_id": "system", 00:11:15.134 "dma_device_type": 1 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.134 "dma_device_type": 2 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "dma_device_id": "system", 00:11:15.134 "dma_device_type": 1 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.134 "dma_device_type": 2 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "dma_device_id": "system", 00:11:15.134 "dma_device_type": 1 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.134 "dma_device_type": 2 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "dma_device_id": "system", 00:11:15.134 "dma_device_type": 1 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:15.134 "dma_device_type": 2 00:11:15.134 } 00:11:15.134 ], 00:11:15.134 "driver_specific": { 00:11:15.134 "raid": { 00:11:15.134 "uuid": "b7459ae4-4bac-49af-b2fd-5d44995b478b", 00:11:15.134 "strip_size_kb": 64, 00:11:15.134 "state": "online", 00:11:15.134 "raid_level": "raid0", 00:11:15.134 "superblock": true, 00:11:15.134 "num_base_bdevs": 4, 00:11:15.134 "num_base_bdevs_discovered": 4, 00:11:15.134 "num_base_bdevs_operational": 4, 00:11:15.134 "base_bdevs_list": [ 00:11:15.134 { 00:11:15.134 "name": "pt1", 00:11:15.134 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.134 "is_configured": true, 00:11:15.134 "data_offset": 2048, 00:11:15.134 "data_size": 63488 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "name": "pt2", 00:11:15.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.134 "is_configured": true, 00:11:15.134 "data_offset": 2048, 00:11:15.134 "data_size": 63488 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "name": "pt3", 00:11:15.134 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.134 "is_configured": true, 00:11:15.134 "data_offset": 2048, 00:11:15.134 "data_size": 63488 00:11:15.134 }, 00:11:15.134 { 00:11:15.134 "name": "pt4", 00:11:15.134 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.134 "is_configured": true, 00:11:15.134 "data_offset": 2048, 00:11:15.134 "data_size": 63488 00:11:15.134 } 00:11:15.134 ] 00:11:15.134 } 00:11:15.134 } 00:11:15.134 }' 00:11:15.134 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.392 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:15.392 pt2 00:11:15.392 pt3 00:11:15.392 pt4' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.393 18:08:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.393 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:15.651 [2024-12-06 18:08:27.571996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b7459ae4-4bac-49af-b2fd-5d44995b478b 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b7459ae4-4bac-49af-b2fd-5d44995b478b ']' 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.651 [2024-12-06 18:08:27.619548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.651 [2024-12-06 18:08:27.619590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.651 [2024-12-06 18:08:27.619712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.651 [2024-12-06 18:08:27.619803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.651 [2024-12-06 18:08:27.619824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:15.651 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.652 [2024-12-06 18:08:27.759441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:15.652 [2024-12-06 18:08:27.761669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:15.652 [2024-12-06 18:08:27.761738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:15.652 [2024-12-06 18:08:27.761779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:15.652 [2024-12-06 18:08:27.761845] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:15.652 [2024-12-06 18:08:27.761905] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:15.652 [2024-12-06 18:08:27.761928] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:15.652 [2024-12-06 18:08:27.761951] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:15.652 [2024-12-06 18:08:27.761966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.652 [2024-12-06 18:08:27.761984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:15.652 request: 00:11:15.652 { 00:11:15.652 "name": "raid_bdev1", 00:11:15.652 "raid_level": "raid0", 00:11:15.652 "base_bdevs": [ 00:11:15.652 "malloc1", 00:11:15.652 "malloc2", 00:11:15.652 "malloc3", 00:11:15.652 "malloc4" 00:11:15.652 ], 00:11:15.652 "strip_size_kb": 64, 00:11:15.652 "superblock": false, 00:11:15.652 "method": "bdev_raid_create", 00:11:15.652 "req_id": 1 00:11:15.652 } 00:11:15.652 Got JSON-RPC error response 00:11:15.652 response: 00:11:15.652 { 00:11:15.652 "code": -17, 00:11:15.652 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:15.652 } 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.652 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.911 [2024-12-06 18:08:27.819403] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:15.911 [2024-12-06 18:08:27.819484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.911 [2024-12-06 18:08:27.819508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:15.911 [2024-12-06 18:08:27.819520] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.911 [2024-12-06 18:08:27.822026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.911 [2024-12-06 18:08:27.822094] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:15.911 [2024-12-06 18:08:27.822209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:15.911 [2024-12-06 18:08:27.822281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:15.911 pt1 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.911 "name": "raid_bdev1", 00:11:15.911 "uuid": "b7459ae4-4bac-49af-b2fd-5d44995b478b", 00:11:15.911 "strip_size_kb": 64, 00:11:15.911 "state": "configuring", 00:11:15.911 "raid_level": "raid0", 00:11:15.911 "superblock": true, 00:11:15.911 "num_base_bdevs": 4, 00:11:15.911 "num_base_bdevs_discovered": 1, 00:11:15.911 "num_base_bdevs_operational": 4, 00:11:15.911 "base_bdevs_list": [ 00:11:15.911 { 00:11:15.911 "name": "pt1", 00:11:15.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.911 "is_configured": true, 00:11:15.911 "data_offset": 2048, 00:11:15.911 "data_size": 63488 00:11:15.911 }, 00:11:15.911 { 00:11:15.911 "name": null, 00:11:15.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.911 "is_configured": false, 00:11:15.911 "data_offset": 2048, 00:11:15.911 "data_size": 63488 00:11:15.911 }, 00:11:15.911 { 00:11:15.911 "name": null, 00:11:15.911 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:15.911 "is_configured": false, 00:11:15.911 "data_offset": 2048, 00:11:15.911 "data_size": 63488 00:11:15.911 }, 00:11:15.911 { 00:11:15.911 "name": null, 00:11:15.911 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.911 "is_configured": false, 00:11:15.911 "data_offset": 2048, 00:11:15.911 "data_size": 63488 00:11:15.911 } 00:11:15.911 ] 00:11:15.911 }' 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.911 18:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.171 [2024-12-06 18:08:28.271057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:16.171 [2024-12-06 18:08:28.271170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.171 [2024-12-06 18:08:28.271203] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:16.171 [2024-12-06 18:08:28.271218] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.171 [2024-12-06 18:08:28.271755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.171 [2024-12-06 18:08:28.271786] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:16.171 [2024-12-06 18:08:28.271892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:16.171 [2024-12-06 18:08:28.271930] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.171 pt2 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.171 [2024-12-06 18:08:28.279087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.171 18:08:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.171 "name": "raid_bdev1", 00:11:16.171 "uuid": "b7459ae4-4bac-49af-b2fd-5d44995b478b", 00:11:16.171 "strip_size_kb": 64, 00:11:16.171 "state": "configuring", 00:11:16.171 "raid_level": "raid0", 00:11:16.171 "superblock": true, 00:11:16.171 "num_base_bdevs": 4, 00:11:16.171 "num_base_bdevs_discovered": 1, 00:11:16.171 "num_base_bdevs_operational": 4, 00:11:16.171 "base_bdevs_list": [ 00:11:16.171 { 00:11:16.171 "name": "pt1", 00:11:16.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.171 "is_configured": true, 00:11:16.171 "data_offset": 2048, 00:11:16.171 "data_size": 63488 00:11:16.171 }, 00:11:16.171 { 00:11:16.171 "name": null, 00:11:16.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.171 "is_configured": false, 00:11:16.171 "data_offset": 0, 00:11:16.171 "data_size": 63488 00:11:16.171 }, 00:11:16.171 { 00:11:16.171 "name": null, 00:11:16.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.171 "is_configured": false, 00:11:16.171 "data_offset": 2048, 00:11:16.171 "data_size": 63488 00:11:16.171 }, 00:11:16.171 { 00:11:16.171 "name": null, 00:11:16.171 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.171 "is_configured": false, 00:11:16.171 "data_offset": 2048, 00:11:16.171 "data_size": 63488 00:11:16.171 } 00:11:16.171 ] 00:11:16.171 }' 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.171 18:08:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.738 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:16.738 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:16.738 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:16.738 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.738 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.739 [2024-12-06 18:08:28.710332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:16.739 [2024-12-06 18:08:28.710428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.739 [2024-12-06 18:08:28.710452] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:16.739 [2024-12-06 18:08:28.710464] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.739 [2024-12-06 18:08:28.711002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.739 [2024-12-06 18:08:28.711024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:16.739 [2024-12-06 18:08:28.711162] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:16.739 [2024-12-06 18:08:28.711190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.739 pt2 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.739 [2024-12-06 18:08:28.718305] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:16.739 [2024-12-06 18:08:28.718375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.739 [2024-12-06 18:08:28.718399] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:16.739 [2024-12-06 18:08:28.718409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.739 [2024-12-06 18:08:28.718924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.739 [2024-12-06 18:08:28.718952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:16.739 [2024-12-06 18:08:28.719059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:16.739 [2024-12-06 18:08:28.719122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:16.739 pt3 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.739 [2024-12-06 18:08:28.726262] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:16.739 [2024-12-06 18:08:28.726348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.739 [2024-12-06 18:08:28.726371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:16.739 [2024-12-06 18:08:28.726381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.739 [2024-12-06 18:08:28.726925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.739 [2024-12-06 18:08:28.726953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:16.739 [2024-12-06 18:08:28.727081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:16.739 [2024-12-06 18:08:28.727114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:16.739 [2024-12-06 18:08:28.727306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.739 [2024-12-06 18:08:28.727317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:16.739 [2024-12-06 18:08:28.727611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:16.739 [2024-12-06 18:08:28.727828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.739 [2024-12-06 18:08:28.727855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:16.739 [2024-12-06 18:08:28.728020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.739 pt4 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.739 "name": "raid_bdev1", 00:11:16.739 "uuid": "b7459ae4-4bac-49af-b2fd-5d44995b478b", 00:11:16.739 "strip_size_kb": 64, 00:11:16.739 "state": "online", 00:11:16.739 "raid_level": "raid0", 00:11:16.739 
"superblock": true, 00:11:16.739 "num_base_bdevs": 4, 00:11:16.739 "num_base_bdevs_discovered": 4, 00:11:16.739 "num_base_bdevs_operational": 4, 00:11:16.739 "base_bdevs_list": [ 00:11:16.739 { 00:11:16.739 "name": "pt1", 00:11:16.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.739 "is_configured": true, 00:11:16.739 "data_offset": 2048, 00:11:16.739 "data_size": 63488 00:11:16.739 }, 00:11:16.739 { 00:11:16.739 "name": "pt2", 00:11:16.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.739 "is_configured": true, 00:11:16.739 "data_offset": 2048, 00:11:16.739 "data_size": 63488 00:11:16.739 }, 00:11:16.739 { 00:11:16.739 "name": "pt3", 00:11:16.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.739 "is_configured": true, 00:11:16.739 "data_offset": 2048, 00:11:16.739 "data_size": 63488 00:11:16.739 }, 00:11:16.739 { 00:11:16.739 "name": "pt4", 00:11:16.739 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.739 "is_configured": true, 00:11:16.739 "data_offset": 2048, 00:11:16.739 "data_size": 63488 00:11:16.739 } 00:11:16.739 ] 00:11:16.739 }' 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.739 18:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.307 18:08:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.307 [2024-12-06 18:08:29.209889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.307 "name": "raid_bdev1", 00:11:17.307 "aliases": [ 00:11:17.307 "b7459ae4-4bac-49af-b2fd-5d44995b478b" 00:11:17.307 ], 00:11:17.307 "product_name": "Raid Volume", 00:11:17.307 "block_size": 512, 00:11:17.307 "num_blocks": 253952, 00:11:17.307 "uuid": "b7459ae4-4bac-49af-b2fd-5d44995b478b", 00:11:17.307 "assigned_rate_limits": { 00:11:17.307 "rw_ios_per_sec": 0, 00:11:17.307 "rw_mbytes_per_sec": 0, 00:11:17.307 "r_mbytes_per_sec": 0, 00:11:17.307 "w_mbytes_per_sec": 0 00:11:17.307 }, 00:11:17.307 "claimed": false, 00:11:17.307 "zoned": false, 00:11:17.307 "supported_io_types": { 00:11:17.307 "read": true, 00:11:17.307 "write": true, 00:11:17.307 "unmap": true, 00:11:17.307 "flush": true, 00:11:17.307 "reset": true, 00:11:17.307 "nvme_admin": false, 00:11:17.307 "nvme_io": false, 00:11:17.307 "nvme_io_md": false, 00:11:17.307 "write_zeroes": true, 00:11:17.307 "zcopy": false, 00:11:17.307 "get_zone_info": false, 00:11:17.307 "zone_management": false, 00:11:17.307 "zone_append": false, 00:11:17.307 "compare": false, 00:11:17.307 "compare_and_write": false, 00:11:17.307 "abort": false, 00:11:17.307 "seek_hole": false, 00:11:17.307 "seek_data": false, 00:11:17.307 "copy": false, 00:11:17.307 "nvme_iov_md": false 00:11:17.307 }, 00:11:17.307 
"memory_domains": [ 00:11:17.307 { 00:11:17.307 "dma_device_id": "system", 00:11:17.307 "dma_device_type": 1 00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.307 "dma_device_type": 2 00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "dma_device_id": "system", 00:11:17.307 "dma_device_type": 1 00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.307 "dma_device_type": 2 00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "dma_device_id": "system", 00:11:17.307 "dma_device_type": 1 00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.307 "dma_device_type": 2 00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "dma_device_id": "system", 00:11:17.307 "dma_device_type": 1 00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.307 "dma_device_type": 2 00:11:17.307 } 00:11:17.307 ], 00:11:17.307 "driver_specific": { 00:11:17.307 "raid": { 00:11:17.307 "uuid": "b7459ae4-4bac-49af-b2fd-5d44995b478b", 00:11:17.307 "strip_size_kb": 64, 00:11:17.307 "state": "online", 00:11:17.307 "raid_level": "raid0", 00:11:17.307 "superblock": true, 00:11:17.307 "num_base_bdevs": 4, 00:11:17.307 "num_base_bdevs_discovered": 4, 00:11:17.307 "num_base_bdevs_operational": 4, 00:11:17.307 "base_bdevs_list": [ 00:11:17.307 { 00:11:17.307 "name": "pt1", 00:11:17.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.307 "is_configured": true, 00:11:17.307 "data_offset": 2048, 00:11:17.307 "data_size": 63488 00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "name": "pt2", 00:11:17.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.307 "is_configured": true, 00:11:17.307 "data_offset": 2048, 00:11:17.307 "data_size": 63488 00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "name": "pt3", 00:11:17.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.307 "is_configured": true, 00:11:17.307 "data_offset": 2048, 00:11:17.307 "data_size": 63488 
00:11:17.307 }, 00:11:17.307 { 00:11:17.307 "name": "pt4", 00:11:17.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.307 "is_configured": true, 00:11:17.307 "data_offset": 2048, 00:11:17.307 "data_size": 63488 00:11:17.307 } 00:11:17.307 ] 00:11:17.307 } 00:11:17.307 } 00:11:17.307 }' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:17.307 pt2 00:11:17.307 pt3 00:11:17.307 pt4' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.307 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.565 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.565 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.565 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.565 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:17.565 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.565 18:08:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.565 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.565 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.565 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:17.566 [2024-12-06 18:08:29.537436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b7459ae4-4bac-49af-b2fd-5d44995b478b '!=' b7459ae4-4bac-49af-b2fd-5d44995b478b ']' 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71192 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71192 ']' 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71192 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71192 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.566 killing process with pid 71192 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71192' 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71192 00:11:17.566 18:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71192 00:11:17.566 [2024-12-06 18:08:29.608220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.566 [2024-12-06 18:08:29.608339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.566 [2024-12-06 18:08:29.608432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.566 [2024-12-06 18:08:29.608449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:18.131 [2024-12-06 18:08:30.076935] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.507 18:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:19.507 00:11:19.507 real 0m5.932s 00:11:19.507 user 0m8.441s 00:11:19.507 sys 0m0.905s 00:11:19.507 18:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.507 18:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.507 ************************************ 00:11:19.507 END TEST raid_superblock_test 
00:11:19.507 ************************************ 00:11:19.507 18:08:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:19.507 18:08:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:19.507 18:08:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.507 18:08:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.507 ************************************ 00:11:19.507 START TEST raid_read_error_test 00:11:19.507 ************************************ 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.k7aNcpShw3 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71464 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71464 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- 
# '[' -z 71464 ']' 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.507 18:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.507 [2024-12-06 18:08:31.604162] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:11:19.507 [2024-12-06 18:08:31.604295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71464 ] 00:11:19.766 [2024-12-06 18:08:31.783358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.766 [2024-12-06 18:08:31.920042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.025 [2024-12-06 18:08:32.160085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.025 [2024-12-06 18:08:32.160137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.592 BaseBdev1_malloc 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.592 true 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.592 [2024-12-06 18:08:32.660457] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:20.592 [2024-12-06 18:08:32.660533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.592 [2024-12-06 18:08:32.660561] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:20.592 [2024-12-06 18:08:32.660573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.592 [2024-12-06 18:08:32.663247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.592 [2024-12-06 18:08:32.663305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:20.592 BaseBdev1 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.592 BaseBdev2_malloc 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.592 true 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.592 [2024-12-06 18:08:32.721661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:20.592 [2024-12-06 18:08:32.721729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.592 [2024-12-06 18:08:32.721752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:20.592 [2024-12-06 18:08:32.721764] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.592 [2024-12-06 18:08:32.724333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.592 [2024-12-06 18:08:32.724382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:20.592 BaseBdev2 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.592 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.851 BaseBdev3_malloc 00:11:20.851 18:08:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.851 true 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.851 [2024-12-06 18:08:32.796009] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:20.851 [2024-12-06 18:08:32.796089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.851 [2024-12-06 18:08:32.796116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:20.851 [2024-12-06 18:08:32.796129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.851 [2024-12-06 18:08:32.798723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.851 [2024-12-06 18:08:32.798773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:20.851 BaseBdev3 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.851 BaseBdev4_malloc 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.851 true 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.851 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.851 [2024-12-06 18:08:32.858902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:20.851 [2024-12-06 18:08:32.858974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.851 [2024-12-06 18:08:32.859001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:20.852 [2024-12-06 18:08:32.859014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.852 [2024-12-06 18:08:32.861636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.852 [2024-12-06 18:08:32.861688] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:20.852 BaseBdev4 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.852 [2024-12-06 18:08:32.866975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.852 [2024-12-06 18:08:32.869260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.852 [2024-12-06 18:08:32.869360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.852 [2024-12-06 18:08:32.869438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.852 [2024-12-06 18:08:32.869712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:20.852 [2024-12-06 18:08:32.869744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:20.852 [2024-12-06 18:08:32.870106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:20.852 [2024-12-06 18:08:32.870341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:20.852 [2024-12-06 18:08:32.870362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:20.852 [2024-12-06 18:08:32.870587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:20.852 18:08:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.852 "name": "raid_bdev1", 00:11:20.852 "uuid": "373ee822-d711-4ac3-bf75-e47bea0e0576", 00:11:20.852 "strip_size_kb": 64, 00:11:20.852 "state": "online", 00:11:20.852 "raid_level": "raid0", 00:11:20.852 "superblock": true, 00:11:20.852 "num_base_bdevs": 4, 00:11:20.852 "num_base_bdevs_discovered": 4, 00:11:20.852 "num_base_bdevs_operational": 4, 00:11:20.852 "base_bdevs_list": [ 00:11:20.852 
{ 00:11:20.852 "name": "BaseBdev1", 00:11:20.852 "uuid": "ef363ea6-d777-5cd5-abb3-69b1374504ba", 00:11:20.852 "is_configured": true, 00:11:20.852 "data_offset": 2048, 00:11:20.852 "data_size": 63488 00:11:20.852 }, 00:11:20.852 { 00:11:20.852 "name": "BaseBdev2", 00:11:20.852 "uuid": "e8dbb1f2-8afb-5cc0-94c5-3969412e5851", 00:11:20.852 "is_configured": true, 00:11:20.852 "data_offset": 2048, 00:11:20.852 "data_size": 63488 00:11:20.852 }, 00:11:20.852 { 00:11:20.852 "name": "BaseBdev3", 00:11:20.852 "uuid": "3ea23f5a-b88c-5c55-a9a8-4bf75e73d17a", 00:11:20.852 "is_configured": true, 00:11:20.852 "data_offset": 2048, 00:11:20.852 "data_size": 63488 00:11:20.852 }, 00:11:20.852 { 00:11:20.852 "name": "BaseBdev4", 00:11:20.852 "uuid": "293d20b2-4f30-590f-b520-5947bf33a361", 00:11:20.852 "is_configured": true, 00:11:20.852 "data_offset": 2048, 00:11:20.852 "data_size": 63488 00:11:20.852 } 00:11:20.852 ] 00:11:20.852 }' 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.852 18:08:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.420 18:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:21.420 18:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:21.420 [2024-12-06 18:08:33.427570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:22.359 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.360 18:08:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.360 18:08:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.360 "name": "raid_bdev1", 00:11:22.360 "uuid": "373ee822-d711-4ac3-bf75-e47bea0e0576", 00:11:22.360 "strip_size_kb": 64, 00:11:22.360 "state": "online", 00:11:22.360 "raid_level": "raid0", 00:11:22.360 "superblock": true, 00:11:22.360 "num_base_bdevs": 4, 00:11:22.360 "num_base_bdevs_discovered": 4, 00:11:22.360 "num_base_bdevs_operational": 4, 00:11:22.360 "base_bdevs_list": [ 00:11:22.360 { 00:11:22.360 "name": "BaseBdev1", 00:11:22.360 "uuid": "ef363ea6-d777-5cd5-abb3-69b1374504ba", 00:11:22.360 "is_configured": true, 00:11:22.360 "data_offset": 2048, 00:11:22.360 "data_size": 63488 00:11:22.360 }, 00:11:22.360 { 00:11:22.360 "name": "BaseBdev2", 00:11:22.360 "uuid": "e8dbb1f2-8afb-5cc0-94c5-3969412e5851", 00:11:22.360 "is_configured": true, 00:11:22.360 "data_offset": 2048, 00:11:22.360 "data_size": 63488 00:11:22.360 }, 00:11:22.360 { 00:11:22.360 "name": "BaseBdev3", 00:11:22.360 "uuid": "3ea23f5a-b88c-5c55-a9a8-4bf75e73d17a", 00:11:22.360 "is_configured": true, 00:11:22.360 "data_offset": 2048, 00:11:22.360 "data_size": 63488 00:11:22.360 }, 00:11:22.360 { 00:11:22.360 "name": "BaseBdev4", 00:11:22.360 "uuid": "293d20b2-4f30-590f-b520-5947bf33a361", 00:11:22.360 "is_configured": true, 00:11:22.360 "data_offset": 2048, 00:11:22.360 "data_size": 63488 00:11:22.360 } 00:11:22.360 ] 00:11:22.360 }' 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.360 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.928 [2024-12-06 18:08:34.842338] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.928 [2024-12-06 18:08:34.842380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.928 [2024-12-06 18:08:34.845599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.928 [2024-12-06 18:08:34.845674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.928 [2024-12-06 18:08:34.845725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.928 [2024-12-06 18:08:34.845739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:22.928 { 00:11:22.928 "results": [ 00:11:22.928 { 00:11:22.928 "job": "raid_bdev1", 00:11:22.928 "core_mask": "0x1", 00:11:22.928 "workload": "randrw", 00:11:22.928 "percentage": 50, 00:11:22.928 "status": "finished", 00:11:22.928 "queue_depth": 1, 00:11:22.928 "io_size": 131072, 00:11:22.928 "runtime": 1.415257, 00:11:22.928 "iops": 12471.939725435028, 00:11:22.928 "mibps": 1558.9924656793785, 00:11:22.928 "io_failed": 1, 00:11:22.928 "io_timeout": 0, 00:11:22.928 "avg_latency_us": 111.17586250231304, 00:11:22.928 "min_latency_us": 32.866375545851525, 00:11:22.928 "max_latency_us": 1931.7379912663755 00:11:22.928 } 00:11:22.928 ], 00:11:22.928 "core_count": 1 00:11:22.928 } 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71464 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71464 ']' 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71464 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:22.928 18:08:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71464 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.928 killing process with pid 71464 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71464' 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71464 00:11:22.928 [2024-12-06 18:08:34.889653] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.928 18:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71464 00:11:23.207 [2024-12-06 18:08:35.285430] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.k7aNcpShw3 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:24.604 00:11:24.604 real 0m5.229s 00:11:24.604 user 0m6.254s 00:11:24.604 sys 0m0.605s 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.604 18:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.604 ************************************ 00:11:24.604 END TEST raid_read_error_test 00:11:24.604 ************************************ 00:11:24.863 18:08:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:24.863 18:08:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:24.863 18:08:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.863 18:08:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:24.863 ************************************ 00:11:24.863 START TEST raid_write_error_test 00:11:24.863 ************************************ 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.863 18:08:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N0ac8gmHcp 
00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71614 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71614 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71614 ']' 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.863 18:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.863 [2024-12-06 18:08:36.891204] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:11:24.863 [2024-12-06 18:08:36.891363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71614 ] 00:11:25.121 [2024-12-06 18:08:37.068948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.121 [2024-12-06 18:08:37.205125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.380 [2024-12-06 18:08:37.445226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.380 [2024-12-06 18:08:37.445306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 BaseBdev1_malloc 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 true 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 [2024-12-06 18:08:37.898800] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:25.948 [2024-12-06 18:08:37.898897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.948 [2024-12-06 18:08:37.898940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:25.948 [2024-12-06 18:08:37.898963] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.948 [2024-12-06 18:08:37.902293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.948 [2024-12-06 18:08:37.902374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:25.948 BaseBdev1 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 BaseBdev2_malloc 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:25.948 18:08:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 true 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 [2024-12-06 18:08:37.962334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:25.948 [2024-12-06 18:08:37.962426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.948 [2024-12-06 18:08:37.962460] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:25.948 [2024-12-06 18:08:37.962477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.948 [2024-12-06 18:08:37.965840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.948 [2024-12-06 18:08:37.965929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:25.948 BaseBdev2 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:25.948 BaseBdev3_malloc 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 true 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 [2024-12-06 18:08:38.030926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:25.948 [2024-12-06 18:08:38.031003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.948 [2024-12-06 18:08:38.031030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:25.948 [2024-12-06 18:08:38.031044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.948 [2024-12-06 18:08:38.033775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.948 [2024-12-06 18:08:38.033835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:25.948 BaseBdev3 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 BaseBdev4_malloc 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 true 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.948 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.948 [2024-12-06 18:08:38.092433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:25.948 [2024-12-06 18:08:38.092503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.948 [2024-12-06 18:08:38.092547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:25.948 [2024-12-06 18:08:38.092560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.948 [2024-12-06 18:08:38.095254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.948 [2024-12-06 18:08:38.095306] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:25.948 BaseBdev4 
00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.949 [2024-12-06 18:08:38.100498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.949 [2024-12-06 18:08:38.102698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.949 [2024-12-06 18:08:38.102799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.949 [2024-12-06 18:08:38.102876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.949 [2024-12-06 18:08:38.103209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:25.949 [2024-12-06 18:08:38.103248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:25.949 [2024-12-06 18:08:38.103592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:25.949 [2024-12-06 18:08:38.103818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:25.949 [2024-12-06 18:08:38.103840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:25.949 [2024-12-06 18:08:38.104081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.949 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.207 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.207 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.207 "name": "raid_bdev1", 00:11:26.207 "uuid": "ce568a95-44d9-45bd-b447-04e47c0b6281", 00:11:26.207 "strip_size_kb": 64, 00:11:26.207 "state": "online", 00:11:26.207 "raid_level": "raid0", 00:11:26.207 "superblock": true, 00:11:26.207 "num_base_bdevs": 4, 00:11:26.207 "num_base_bdevs_discovered": 4, 00:11:26.207 
"num_base_bdevs_operational": 4, 00:11:26.207 "base_bdevs_list": [ 00:11:26.207 { 00:11:26.207 "name": "BaseBdev1", 00:11:26.207 "uuid": "bd951164-400a-5720-8b91-1d727511fc5a", 00:11:26.207 "is_configured": true, 00:11:26.207 "data_offset": 2048, 00:11:26.207 "data_size": 63488 00:11:26.207 }, 00:11:26.207 { 00:11:26.207 "name": "BaseBdev2", 00:11:26.207 "uuid": "5e634b85-4672-5df9-b4bf-adebc532a7a7", 00:11:26.207 "is_configured": true, 00:11:26.207 "data_offset": 2048, 00:11:26.207 "data_size": 63488 00:11:26.207 }, 00:11:26.207 { 00:11:26.207 "name": "BaseBdev3", 00:11:26.207 "uuid": "9a3dfd04-06b5-525d-a941-e9ac38b279ec", 00:11:26.207 "is_configured": true, 00:11:26.207 "data_offset": 2048, 00:11:26.207 "data_size": 63488 00:11:26.207 }, 00:11:26.207 { 00:11:26.207 "name": "BaseBdev4", 00:11:26.207 "uuid": "ea10d251-cbc1-5124-bba0-88cbc46446fa", 00:11:26.207 "is_configured": true, 00:11:26.207 "data_offset": 2048, 00:11:26.207 "data_size": 63488 00:11:26.207 } 00:11:26.207 ] 00:11:26.207 }' 00:11:26.207 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.207 18:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.465 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:26.465 18:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:26.722 [2024-12-06 18:08:38.704895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.658 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.658 "name": "raid_bdev1", 00:11:27.658 "uuid": "ce568a95-44d9-45bd-b447-04e47c0b6281", 00:11:27.658 "strip_size_kb": 64, 00:11:27.659 "state": "online", 00:11:27.659 "raid_level": "raid0", 00:11:27.659 "superblock": true, 00:11:27.659 "num_base_bdevs": 4, 00:11:27.659 "num_base_bdevs_discovered": 4, 00:11:27.659 "num_base_bdevs_operational": 4, 00:11:27.659 "base_bdevs_list": [ 00:11:27.659 { 00:11:27.659 "name": "BaseBdev1", 00:11:27.659 "uuid": "bd951164-400a-5720-8b91-1d727511fc5a", 00:11:27.659 "is_configured": true, 00:11:27.659 "data_offset": 2048, 00:11:27.659 "data_size": 63488 00:11:27.659 }, 00:11:27.659 { 00:11:27.659 "name": "BaseBdev2", 00:11:27.659 "uuid": "5e634b85-4672-5df9-b4bf-adebc532a7a7", 00:11:27.659 "is_configured": true, 00:11:27.659 "data_offset": 2048, 00:11:27.659 "data_size": 63488 00:11:27.659 }, 00:11:27.659 { 00:11:27.659 "name": "BaseBdev3", 00:11:27.659 "uuid": "9a3dfd04-06b5-525d-a941-e9ac38b279ec", 00:11:27.659 "is_configured": true, 00:11:27.659 "data_offset": 2048, 00:11:27.659 "data_size": 63488 00:11:27.659 }, 00:11:27.659 { 00:11:27.659 "name": "BaseBdev4", 00:11:27.659 "uuid": "ea10d251-cbc1-5124-bba0-88cbc46446fa", 00:11:27.659 "is_configured": true, 00:11:27.659 "data_offset": 2048, 00:11:27.659 "data_size": 63488 00:11:27.659 } 00:11:27.659 ] 00:11:27.659 }' 00:11:27.659 18:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.659 18:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:27.919 [2024-12-06 18:08:40.020881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.919 [2024-12-06 18:08:40.020922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.919 [2024-12-06 18:08:40.023987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.919 [2024-12-06 18:08:40.024054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.919 [2024-12-06 18:08:40.024119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.919 [2024-12-06 18:08:40.024134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:27.919 { 00:11:27.919 "results": [ 00:11:27.919 { 00:11:27.919 "job": "raid_bdev1", 00:11:27.919 "core_mask": "0x1", 00:11:27.919 "workload": "randrw", 00:11:27.919 "percentage": 50, 00:11:27.919 "status": "finished", 00:11:27.919 "queue_depth": 1, 00:11:27.919 "io_size": 131072, 00:11:27.919 "runtime": 1.316463, 00:11:27.919 "iops": 13669.203008364078, 00:11:27.919 "mibps": 1708.6503760455098, 00:11:27.919 "io_failed": 1, 00:11:27.919 "io_timeout": 0, 00:11:27.919 "avg_latency_us": 101.3576800667009, 00:11:27.919 "min_latency_us": 27.72401746724891, 00:11:27.919 "max_latency_us": 1459.5353711790392 00:11:27.919 } 00:11:27.919 ], 00:11:27.919 "core_count": 1 00:11:27.919 } 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71614 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71614 ']' 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71614 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71614 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.919 killing process with pid 71614 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71614' 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71614 00:11:27.919 [2024-12-06 18:08:40.072713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.919 18:08:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71614 00:11:28.489 [2024-12-06 18:08:40.443573] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.864 18:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N0ac8gmHcp 00:11:29.864 18:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:29.864 18:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:29.864 18:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:11:29.864 18:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:29.864 18:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:29.864 18:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:29.864 18:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:11:29.864 00:11:29.864 real 0m5.060s 00:11:29.864 user 0m6.016s 00:11:29.864 sys 0m0.605s 00:11:29.864 18:08:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.864 18:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.864 ************************************ 00:11:29.864 END TEST raid_write_error_test 00:11:29.864 ************************************ 00:11:29.864 18:08:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:29.864 18:08:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:29.864 18:08:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.864 18:08:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.864 18:08:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.864 ************************************ 00:11:29.864 START TEST raid_state_function_test 00:11:29.864 ************************************ 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71760 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71760' 00:11:29.864 Process raid pid: 71760 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71760 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71760 ']' 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.864 18:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.864 [2024-12-06 18:08:42.018644] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:11:29.864 [2024-12-06 18:08:42.018802] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.124 [2024-12-06 18:08:42.176670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.384 [2024-12-06 18:08:42.296441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.384 [2024-12-06 18:08:42.521231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.384 [2024-12-06 18:08:42.521278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.954 [2024-12-06 18:08:42.894323] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.954 [2024-12-06 18:08:42.894386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.954 [2024-12-06 18:08:42.894399] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.954 [2024-12-06 18:08:42.894409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.954 [2024-12-06 18:08:42.894416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:30.954 [2024-12-06 18:08:42.894442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:30.954 [2024-12-06 18:08:42.894451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:30.954 [2024-12-06 18:08:42.894461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.954 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.954 "name": "Existed_Raid", 00:11:30.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.955 "strip_size_kb": 64, 00:11:30.955 "state": "configuring", 00:11:30.955 "raid_level": "concat", 00:11:30.955 "superblock": false, 00:11:30.955 "num_base_bdevs": 4, 00:11:30.955 "num_base_bdevs_discovered": 0, 00:11:30.955 "num_base_bdevs_operational": 4, 00:11:30.955 "base_bdevs_list": [ 00:11:30.955 { 00:11:30.955 "name": "BaseBdev1", 00:11:30.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.955 "is_configured": false, 00:11:30.955 "data_offset": 0, 00:11:30.955 "data_size": 0 00:11:30.955 }, 00:11:30.955 { 00:11:30.955 "name": "BaseBdev2", 00:11:30.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.955 "is_configured": false, 00:11:30.955 "data_offset": 0, 00:11:30.955 "data_size": 0 00:11:30.955 }, 00:11:30.955 { 00:11:30.955 "name": "BaseBdev3", 00:11:30.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.955 "is_configured": false, 00:11:30.955 "data_offset": 0, 00:11:30.955 "data_size": 0 00:11:30.955 }, 00:11:30.955 { 00:11:30.955 "name": "BaseBdev4", 00:11:30.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.955 "is_configured": false, 00:11:30.955 "data_offset": 0, 00:11:30.955 "data_size": 0 00:11:30.955 } 00:11:30.955 ] 00:11:30.955 }' 00:11:30.955 18:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.955 18:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.215 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:31.215 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.215 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.215 [2024-12-06 18:08:43.365495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.215 [2024-12-06 18:08:43.365547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:31.215 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.215 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.215 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.215 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.215 [2024-12-06 18:08:43.377464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.215 [2024-12-06 18:08:43.377515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.215 [2024-12-06 18:08:43.377526] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.215 [2024-12-06 18:08:43.377553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.215 [2024-12-06 18:08:43.377561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.215 [2024-12-06 18:08:43.377571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.215 [2024-12-06 18:08:43.377579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.215 [2024-12-06 18:08:43.377589] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.477 [2024-12-06 18:08:43.429104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.477 BaseBdev1 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.477 [ 00:11:31.477 { 00:11:31.477 "name": "BaseBdev1", 00:11:31.477 "aliases": [ 00:11:31.477 "3e77936b-7ea1-4df4-919a-42e498ded55c" 00:11:31.477 ], 00:11:31.477 "product_name": "Malloc disk", 00:11:31.477 "block_size": 512, 00:11:31.477 "num_blocks": 65536, 00:11:31.477 "uuid": "3e77936b-7ea1-4df4-919a-42e498ded55c", 00:11:31.477 "assigned_rate_limits": { 00:11:31.477 "rw_ios_per_sec": 0, 00:11:31.477 "rw_mbytes_per_sec": 0, 00:11:31.477 "r_mbytes_per_sec": 0, 00:11:31.477 "w_mbytes_per_sec": 0 00:11:31.477 }, 00:11:31.477 "claimed": true, 00:11:31.477 "claim_type": "exclusive_write", 00:11:31.477 "zoned": false, 00:11:31.477 "supported_io_types": { 00:11:31.477 "read": true, 00:11:31.477 "write": true, 00:11:31.477 "unmap": true, 00:11:31.477 "flush": true, 00:11:31.477 "reset": true, 00:11:31.477 "nvme_admin": false, 00:11:31.477 "nvme_io": false, 00:11:31.477 "nvme_io_md": false, 00:11:31.477 "write_zeroes": true, 00:11:31.477 "zcopy": true, 00:11:31.477 "get_zone_info": false, 00:11:31.477 "zone_management": false, 00:11:31.477 "zone_append": false, 00:11:31.477 "compare": false, 00:11:31.477 "compare_and_write": false, 00:11:31.477 "abort": true, 00:11:31.477 "seek_hole": false, 00:11:31.477 "seek_data": false, 00:11:31.477 "copy": true, 00:11:31.477 "nvme_iov_md": false 00:11:31.477 }, 00:11:31.477 "memory_domains": [ 00:11:31.477 { 00:11:31.477 "dma_device_id": "system", 00:11:31.477 "dma_device_type": 1 00:11:31.477 }, 00:11:31.477 { 00:11:31.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.477 "dma_device_type": 2 00:11:31.477 } 00:11:31.477 ], 00:11:31.477 "driver_specific": {} 00:11:31.477 } 00:11:31.477 ] 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.477 "name": "Existed_Raid", 
00:11:31.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.477 "strip_size_kb": 64, 00:11:31.477 "state": "configuring", 00:11:31.477 "raid_level": "concat", 00:11:31.477 "superblock": false, 00:11:31.477 "num_base_bdevs": 4, 00:11:31.477 "num_base_bdevs_discovered": 1, 00:11:31.477 "num_base_bdevs_operational": 4, 00:11:31.477 "base_bdevs_list": [ 00:11:31.477 { 00:11:31.477 "name": "BaseBdev1", 00:11:31.477 "uuid": "3e77936b-7ea1-4df4-919a-42e498ded55c", 00:11:31.477 "is_configured": true, 00:11:31.477 "data_offset": 0, 00:11:31.477 "data_size": 65536 00:11:31.477 }, 00:11:31.477 { 00:11:31.477 "name": "BaseBdev2", 00:11:31.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.477 "is_configured": false, 00:11:31.477 "data_offset": 0, 00:11:31.477 "data_size": 0 00:11:31.477 }, 00:11:31.477 { 00:11:31.477 "name": "BaseBdev3", 00:11:31.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.477 "is_configured": false, 00:11:31.477 "data_offset": 0, 00:11:31.477 "data_size": 0 00:11:31.477 }, 00:11:31.477 { 00:11:31.477 "name": "BaseBdev4", 00:11:31.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.477 "is_configured": false, 00:11:31.477 "data_offset": 0, 00:11:31.477 "data_size": 0 00:11:31.477 } 00:11:31.477 ] 00:11:31.477 }' 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.477 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.046 [2024-12-06 18:08:43.936310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.046 [2024-12-06 18:08:43.936382] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.046 [2024-12-06 18:08:43.948348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.046 [2024-12-06 18:08:43.950429] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.046 [2024-12-06 18:08:43.950476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.046 [2024-12-06 18:08:43.950487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.046 [2024-12-06 18:08:43.950499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.046 [2024-12-06 18:08:43.950507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:32.046 [2024-12-06 18:08:43.950516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.046 18:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.046 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.046 "name": "Existed_Raid", 00:11:32.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.046 "strip_size_kb": 64, 00:11:32.046 "state": "configuring", 00:11:32.046 "raid_level": "concat", 00:11:32.046 "superblock": false, 00:11:32.046 "num_base_bdevs": 4, 00:11:32.046 
"num_base_bdevs_discovered": 1, 00:11:32.046 "num_base_bdevs_operational": 4, 00:11:32.046 "base_bdevs_list": [ 00:11:32.046 { 00:11:32.046 "name": "BaseBdev1", 00:11:32.046 "uuid": "3e77936b-7ea1-4df4-919a-42e498ded55c", 00:11:32.046 "is_configured": true, 00:11:32.046 "data_offset": 0, 00:11:32.047 "data_size": 65536 00:11:32.047 }, 00:11:32.047 { 00:11:32.047 "name": "BaseBdev2", 00:11:32.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.047 "is_configured": false, 00:11:32.047 "data_offset": 0, 00:11:32.047 "data_size": 0 00:11:32.047 }, 00:11:32.047 { 00:11:32.047 "name": "BaseBdev3", 00:11:32.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.047 "is_configured": false, 00:11:32.047 "data_offset": 0, 00:11:32.047 "data_size": 0 00:11:32.047 }, 00:11:32.047 { 00:11:32.047 "name": "BaseBdev4", 00:11:32.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.047 "is_configured": false, 00:11:32.047 "data_offset": 0, 00:11:32.047 "data_size": 0 00:11:32.047 } 00:11:32.047 ] 00:11:32.047 }' 00:11:32.047 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.047 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.307 [2024-12-06 18:08:44.442166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.307 BaseBdev2 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:32.307 18:08:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.307 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.307 [ 00:11:32.307 { 00:11:32.307 "name": "BaseBdev2", 00:11:32.307 "aliases": [ 00:11:32.307 "738cbb14-c8b1-40b4-b433-f246d4854372" 00:11:32.307 ], 00:11:32.307 "product_name": "Malloc disk", 00:11:32.307 "block_size": 512, 00:11:32.307 "num_blocks": 65536, 00:11:32.307 "uuid": "738cbb14-c8b1-40b4-b433-f246d4854372", 00:11:32.307 "assigned_rate_limits": { 00:11:32.307 "rw_ios_per_sec": 0, 00:11:32.307 "rw_mbytes_per_sec": 0, 00:11:32.307 "r_mbytes_per_sec": 0, 00:11:32.307 "w_mbytes_per_sec": 0 00:11:32.307 }, 00:11:32.307 "claimed": true, 00:11:32.307 "claim_type": "exclusive_write", 00:11:32.307 "zoned": false, 00:11:32.307 "supported_io_types": { 
00:11:32.307 "read": true, 00:11:32.307 "write": true, 00:11:32.307 "unmap": true, 00:11:32.307 "flush": true, 00:11:32.307 "reset": true, 00:11:32.307 "nvme_admin": false, 00:11:32.575 "nvme_io": false, 00:11:32.575 "nvme_io_md": false, 00:11:32.575 "write_zeroes": true, 00:11:32.575 "zcopy": true, 00:11:32.575 "get_zone_info": false, 00:11:32.575 "zone_management": false, 00:11:32.575 "zone_append": false, 00:11:32.575 "compare": false, 00:11:32.575 "compare_and_write": false, 00:11:32.575 "abort": true, 00:11:32.575 "seek_hole": false, 00:11:32.575 "seek_data": false, 00:11:32.575 "copy": true, 00:11:32.575 "nvme_iov_md": false 00:11:32.575 }, 00:11:32.575 "memory_domains": [ 00:11:32.575 { 00:11:32.575 "dma_device_id": "system", 00:11:32.575 "dma_device_type": 1 00:11:32.575 }, 00:11:32.575 { 00:11:32.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.575 "dma_device_type": 2 00:11:32.575 } 00:11:32.575 ], 00:11:32.575 "driver_specific": {} 00:11:32.575 } 00:11:32.575 ] 00:11:32.575 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.575 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:32.575 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:32.575 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.575 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.575 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.575 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.575 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.576 "name": "Existed_Raid", 00:11:32.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.576 "strip_size_kb": 64, 00:11:32.576 "state": "configuring", 00:11:32.576 "raid_level": "concat", 00:11:32.576 "superblock": false, 00:11:32.576 "num_base_bdevs": 4, 00:11:32.576 "num_base_bdevs_discovered": 2, 00:11:32.576 "num_base_bdevs_operational": 4, 00:11:32.576 "base_bdevs_list": [ 00:11:32.576 { 00:11:32.576 "name": "BaseBdev1", 00:11:32.576 "uuid": "3e77936b-7ea1-4df4-919a-42e498ded55c", 00:11:32.576 "is_configured": true, 00:11:32.576 "data_offset": 0, 00:11:32.576 "data_size": 65536 00:11:32.576 }, 00:11:32.576 { 00:11:32.576 "name": "BaseBdev2", 00:11:32.576 "uuid": "738cbb14-c8b1-40b4-b433-f246d4854372", 00:11:32.576 
"is_configured": true, 00:11:32.576 "data_offset": 0, 00:11:32.576 "data_size": 65536 00:11:32.576 }, 00:11:32.576 { 00:11:32.576 "name": "BaseBdev3", 00:11:32.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.576 "is_configured": false, 00:11:32.576 "data_offset": 0, 00:11:32.576 "data_size": 0 00:11:32.576 }, 00:11:32.576 { 00:11:32.576 "name": "BaseBdev4", 00:11:32.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.576 "is_configured": false, 00:11:32.576 "data_offset": 0, 00:11:32.576 "data_size": 0 00:11:32.576 } 00:11:32.576 ] 00:11:32.576 }' 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.576 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.847 [2024-12-06 18:08:44.963100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.847 BaseBdev3 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.847 [ 00:11:32.847 { 00:11:32.847 "name": "BaseBdev3", 00:11:32.847 "aliases": [ 00:11:32.847 "5044dbe5-34fe-455f-98b2-a6d575540c89" 00:11:32.847 ], 00:11:32.847 "product_name": "Malloc disk", 00:11:32.847 "block_size": 512, 00:11:32.847 "num_blocks": 65536, 00:11:32.847 "uuid": "5044dbe5-34fe-455f-98b2-a6d575540c89", 00:11:32.847 "assigned_rate_limits": { 00:11:32.847 "rw_ios_per_sec": 0, 00:11:32.847 "rw_mbytes_per_sec": 0, 00:11:32.847 "r_mbytes_per_sec": 0, 00:11:32.847 "w_mbytes_per_sec": 0 00:11:32.847 }, 00:11:32.847 "claimed": true, 00:11:32.847 "claim_type": "exclusive_write", 00:11:32.847 "zoned": false, 00:11:32.847 "supported_io_types": { 00:11:32.847 "read": true, 00:11:32.847 "write": true, 00:11:32.847 "unmap": true, 00:11:32.847 "flush": true, 00:11:32.847 "reset": true, 00:11:32.847 "nvme_admin": false, 00:11:32.847 "nvme_io": false, 00:11:32.847 "nvme_io_md": false, 00:11:32.847 "write_zeroes": true, 00:11:32.847 "zcopy": true, 00:11:32.847 "get_zone_info": false, 00:11:32.847 "zone_management": false, 00:11:32.847 "zone_append": false, 00:11:32.847 "compare": false, 00:11:32.847 "compare_and_write": false, 
00:11:32.847 "abort": true, 00:11:32.847 "seek_hole": false, 00:11:32.847 "seek_data": false, 00:11:32.847 "copy": true, 00:11:32.847 "nvme_iov_md": false 00:11:32.847 }, 00:11:32.847 "memory_domains": [ 00:11:32.847 { 00:11:32.847 "dma_device_id": "system", 00:11:32.847 "dma_device_type": 1 00:11:32.847 }, 00:11:32.847 { 00:11:32.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.847 "dma_device_type": 2 00:11:32.847 } 00:11:32.847 ], 00:11:32.847 "driver_specific": {} 00:11:32.847 } 00:11:32.847 ] 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:32.847 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.848 18:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.848 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.106 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.106 "name": "Existed_Raid", 00:11:33.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.106 "strip_size_kb": 64, 00:11:33.106 "state": "configuring", 00:11:33.106 "raid_level": "concat", 00:11:33.106 "superblock": false, 00:11:33.106 "num_base_bdevs": 4, 00:11:33.106 "num_base_bdevs_discovered": 3, 00:11:33.106 "num_base_bdevs_operational": 4, 00:11:33.106 "base_bdevs_list": [ 00:11:33.106 { 00:11:33.106 "name": "BaseBdev1", 00:11:33.106 "uuid": "3e77936b-7ea1-4df4-919a-42e498ded55c", 00:11:33.106 "is_configured": true, 00:11:33.106 "data_offset": 0, 00:11:33.106 "data_size": 65536 00:11:33.106 }, 00:11:33.106 { 00:11:33.106 "name": "BaseBdev2", 00:11:33.106 "uuid": "738cbb14-c8b1-40b4-b433-f246d4854372", 00:11:33.106 "is_configured": true, 00:11:33.107 "data_offset": 0, 00:11:33.107 "data_size": 65536 00:11:33.107 }, 00:11:33.107 { 00:11:33.107 "name": "BaseBdev3", 00:11:33.107 "uuid": "5044dbe5-34fe-455f-98b2-a6d575540c89", 00:11:33.107 "is_configured": true, 00:11:33.107 "data_offset": 0, 00:11:33.107 "data_size": 65536 00:11:33.107 }, 00:11:33.107 { 00:11:33.107 "name": "BaseBdev4", 00:11:33.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.107 "is_configured": false, 
00:11:33.107 "data_offset": 0, 00:11:33.107 "data_size": 0 00:11:33.107 } 00:11:33.107 ] 00:11:33.107 }' 00:11:33.107 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.107 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.365 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.366 [2024-12-06 18:08:45.492919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.366 [2024-12-06 18:08:45.492976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:33.366 [2024-12-06 18:08:45.492985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:33.366 [2024-12-06 18:08:45.493284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.366 [2024-12-06 18:08:45.493462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:33.366 [2024-12-06 18:08:45.493482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:33.366 [2024-12-06 18:08:45.493804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.366 BaseBdev4 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.366 [ 00:11:33.366 { 00:11:33.366 "name": "BaseBdev4", 00:11:33.366 "aliases": [ 00:11:33.366 "160454c3-cd71-490e-b720-8615b7afd7ae" 00:11:33.366 ], 00:11:33.366 "product_name": "Malloc disk", 00:11:33.366 "block_size": 512, 00:11:33.366 "num_blocks": 65536, 00:11:33.366 "uuid": "160454c3-cd71-490e-b720-8615b7afd7ae", 00:11:33.366 "assigned_rate_limits": { 00:11:33.366 "rw_ios_per_sec": 0, 00:11:33.366 "rw_mbytes_per_sec": 0, 00:11:33.366 "r_mbytes_per_sec": 0, 00:11:33.366 "w_mbytes_per_sec": 0 00:11:33.366 }, 00:11:33.366 "claimed": true, 00:11:33.366 "claim_type": "exclusive_write", 00:11:33.366 "zoned": false, 00:11:33.366 "supported_io_types": { 00:11:33.366 "read": true, 00:11:33.366 "write": true, 00:11:33.366 "unmap": true, 00:11:33.366 "flush": true, 00:11:33.366 "reset": true, 00:11:33.366 
"nvme_admin": false, 00:11:33.366 "nvme_io": false, 00:11:33.366 "nvme_io_md": false, 00:11:33.366 "write_zeroes": true, 00:11:33.366 "zcopy": true, 00:11:33.366 "get_zone_info": false, 00:11:33.366 "zone_management": false, 00:11:33.366 "zone_append": false, 00:11:33.366 "compare": false, 00:11:33.366 "compare_and_write": false, 00:11:33.366 "abort": true, 00:11:33.366 "seek_hole": false, 00:11:33.366 "seek_data": false, 00:11:33.366 "copy": true, 00:11:33.366 "nvme_iov_md": false 00:11:33.366 }, 00:11:33.366 "memory_domains": [ 00:11:33.366 { 00:11:33.366 "dma_device_id": "system", 00:11:33.366 "dma_device_type": 1 00:11:33.366 }, 00:11:33.366 { 00:11:33.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.366 "dma_device_type": 2 00:11:33.366 } 00:11:33.366 ], 00:11:33.366 "driver_specific": {} 00:11:33.366 } 00:11:33.366 ] 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.366 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:33.624 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.625 
18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.625 "name": "Existed_Raid", 00:11:33.625 "uuid": "cab814d7-6259-45bb-9f58-b67c409e3d14", 00:11:33.625 "strip_size_kb": 64, 00:11:33.625 "state": "online", 00:11:33.625 "raid_level": "concat", 00:11:33.625 "superblock": false, 00:11:33.625 "num_base_bdevs": 4, 00:11:33.625 "num_base_bdevs_discovered": 4, 00:11:33.625 "num_base_bdevs_operational": 4, 00:11:33.625 "base_bdevs_list": [ 00:11:33.625 { 00:11:33.625 "name": "BaseBdev1", 00:11:33.625 "uuid": "3e77936b-7ea1-4df4-919a-42e498ded55c", 00:11:33.625 "is_configured": true, 00:11:33.625 "data_offset": 0, 00:11:33.625 "data_size": 65536 00:11:33.625 }, 00:11:33.625 { 00:11:33.625 "name": "BaseBdev2", 00:11:33.625 "uuid": "738cbb14-c8b1-40b4-b433-f246d4854372", 00:11:33.625 "is_configured": true, 00:11:33.625 "data_offset": 0, 00:11:33.625 "data_size": 65536 00:11:33.625 }, 00:11:33.625 { 00:11:33.625 "name": "BaseBdev3", 
00:11:33.625 "uuid": "5044dbe5-34fe-455f-98b2-a6d575540c89", 00:11:33.625 "is_configured": true, 00:11:33.625 "data_offset": 0, 00:11:33.625 "data_size": 65536 00:11:33.625 }, 00:11:33.625 { 00:11:33.625 "name": "BaseBdev4", 00:11:33.625 "uuid": "160454c3-cd71-490e-b720-8615b7afd7ae", 00:11:33.625 "is_configured": true, 00:11:33.625 "data_offset": 0, 00:11:33.625 "data_size": 65536 00:11:33.625 } 00:11:33.625 ] 00:11:33.625 }' 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.625 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.884 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:33.884 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:33.884 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.884 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.884 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.884 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.884 18:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:33.884 18:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.884 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.884 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.884 [2024-12-06 18:08:46.004552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.884 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.884 
18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.884 "name": "Existed_Raid", 00:11:33.884 "aliases": [ 00:11:33.884 "cab814d7-6259-45bb-9f58-b67c409e3d14" 00:11:33.884 ], 00:11:33.884 "product_name": "Raid Volume", 00:11:33.884 "block_size": 512, 00:11:33.884 "num_blocks": 262144, 00:11:33.884 "uuid": "cab814d7-6259-45bb-9f58-b67c409e3d14", 00:11:33.884 "assigned_rate_limits": { 00:11:33.884 "rw_ios_per_sec": 0, 00:11:33.884 "rw_mbytes_per_sec": 0, 00:11:33.884 "r_mbytes_per_sec": 0, 00:11:33.884 "w_mbytes_per_sec": 0 00:11:33.884 }, 00:11:33.884 "claimed": false, 00:11:33.884 "zoned": false, 00:11:33.884 "supported_io_types": { 00:11:33.884 "read": true, 00:11:33.884 "write": true, 00:11:33.884 "unmap": true, 00:11:33.884 "flush": true, 00:11:33.884 "reset": true, 00:11:33.884 "nvme_admin": false, 00:11:33.884 "nvme_io": false, 00:11:33.884 "nvme_io_md": false, 00:11:33.884 "write_zeroes": true, 00:11:33.884 "zcopy": false, 00:11:33.884 "get_zone_info": false, 00:11:33.884 "zone_management": false, 00:11:33.884 "zone_append": false, 00:11:33.884 "compare": false, 00:11:33.884 "compare_and_write": false, 00:11:33.884 "abort": false, 00:11:33.884 "seek_hole": false, 00:11:33.884 "seek_data": false, 00:11:33.884 "copy": false, 00:11:33.884 "nvme_iov_md": false 00:11:33.884 }, 00:11:33.884 "memory_domains": [ 00:11:33.884 { 00:11:33.884 "dma_device_id": "system", 00:11:33.884 "dma_device_type": 1 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.884 "dma_device_type": 2 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "dma_device_id": "system", 00:11:33.884 "dma_device_type": 1 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.884 "dma_device_type": 2 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "dma_device_id": "system", 00:11:33.884 "dma_device_type": 1 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:33.884 "dma_device_type": 2 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "dma_device_id": "system", 00:11:33.884 "dma_device_type": 1 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.884 "dma_device_type": 2 00:11:33.884 } 00:11:33.884 ], 00:11:33.884 "driver_specific": { 00:11:33.884 "raid": { 00:11:33.884 "uuid": "cab814d7-6259-45bb-9f58-b67c409e3d14", 00:11:33.884 "strip_size_kb": 64, 00:11:33.884 "state": "online", 00:11:33.884 "raid_level": "concat", 00:11:33.884 "superblock": false, 00:11:33.884 "num_base_bdevs": 4, 00:11:33.884 "num_base_bdevs_discovered": 4, 00:11:33.884 "num_base_bdevs_operational": 4, 00:11:33.884 "base_bdevs_list": [ 00:11:33.884 { 00:11:33.884 "name": "BaseBdev1", 00:11:33.884 "uuid": "3e77936b-7ea1-4df4-919a-42e498ded55c", 00:11:33.884 "is_configured": true, 00:11:33.884 "data_offset": 0, 00:11:33.884 "data_size": 65536 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "name": "BaseBdev2", 00:11:33.884 "uuid": "738cbb14-c8b1-40b4-b433-f246d4854372", 00:11:33.884 "is_configured": true, 00:11:33.884 "data_offset": 0, 00:11:33.884 "data_size": 65536 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "name": "BaseBdev3", 00:11:33.884 "uuid": "5044dbe5-34fe-455f-98b2-a6d575540c89", 00:11:33.884 "is_configured": true, 00:11:33.884 "data_offset": 0, 00:11:33.884 "data_size": 65536 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "name": "BaseBdev4", 00:11:33.884 "uuid": "160454c3-cd71-490e-b720-8615b7afd7ae", 00:11:33.884 "is_configured": true, 00:11:33.884 "data_offset": 0, 00:11:33.884 "data_size": 65536 00:11:33.884 } 00:11:33.884 ] 00:11:33.884 } 00:11:33.884 } 00:11:33.884 }' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:34.144 BaseBdev2 
00:11:34.144 BaseBdev3 00:11:34.144 BaseBdev4' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.144 18:08:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.144 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.404 18:08:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.404 [2024-12-06 18:08:46.347635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.404 [2024-12-06 18:08:46.347672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.404 [2024-12-06 18:08:46.347731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.404 "name": "Existed_Raid", 00:11:34.404 "uuid": "cab814d7-6259-45bb-9f58-b67c409e3d14", 00:11:34.404 "strip_size_kb": 64, 00:11:34.404 "state": "offline", 00:11:34.404 "raid_level": "concat", 00:11:34.404 "superblock": false, 00:11:34.404 "num_base_bdevs": 4, 00:11:34.404 "num_base_bdevs_discovered": 3, 00:11:34.404 "num_base_bdevs_operational": 3, 00:11:34.404 "base_bdevs_list": [ 00:11:34.404 { 00:11:34.404 "name": null, 00:11:34.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.404 "is_configured": false, 00:11:34.404 "data_offset": 0, 00:11:34.404 "data_size": 65536 00:11:34.404 }, 00:11:34.404 { 00:11:34.404 "name": "BaseBdev2", 00:11:34.404 "uuid": "738cbb14-c8b1-40b4-b433-f246d4854372", 00:11:34.404 "is_configured": 
true, 00:11:34.404 "data_offset": 0, 00:11:34.404 "data_size": 65536 00:11:34.404 }, 00:11:34.404 { 00:11:34.404 "name": "BaseBdev3", 00:11:34.404 "uuid": "5044dbe5-34fe-455f-98b2-a6d575540c89", 00:11:34.404 "is_configured": true, 00:11:34.404 "data_offset": 0, 00:11:34.404 "data_size": 65536 00:11:34.404 }, 00:11:34.404 { 00:11:34.404 "name": "BaseBdev4", 00:11:34.404 "uuid": "160454c3-cd71-490e-b720-8615b7afd7ae", 00:11:34.404 "is_configured": true, 00:11:34.404 "data_offset": 0, 00:11:34.404 "data_size": 65536 00:11:34.404 } 00:11:34.404 ] 00:11:34.404 }' 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.404 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:34.972 18:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.972 [2024-12-06 18:08:46.954717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.972 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.972 [2024-12-06 18:08:47.118632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.232 18:08:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.232 [2024-12-06 18:08:47.282647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:35.232 [2024-12-06 18:08:47.282707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.232 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:35.233 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.233 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.233 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.492 BaseBdev2 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.492 [ 00:11:35.492 { 00:11:35.492 "name": "BaseBdev2", 00:11:35.492 "aliases": [ 00:11:35.492 "0a43a444-8e13-4a64-86ac-e3d2697604de" 00:11:35.492 ], 00:11:35.492 "product_name": "Malloc disk", 00:11:35.492 "block_size": 512, 00:11:35.492 "num_blocks": 65536, 00:11:35.492 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:35.492 "assigned_rate_limits": { 00:11:35.492 "rw_ios_per_sec": 0, 00:11:35.492 "rw_mbytes_per_sec": 0, 00:11:35.492 "r_mbytes_per_sec": 0, 00:11:35.492 "w_mbytes_per_sec": 0 00:11:35.492 }, 00:11:35.492 "claimed": false, 00:11:35.492 "zoned": false, 00:11:35.492 "supported_io_types": { 00:11:35.492 "read": true, 00:11:35.492 "write": true, 00:11:35.492 "unmap": true, 00:11:35.492 "flush": true, 00:11:35.492 "reset": true, 00:11:35.492 "nvme_admin": false, 00:11:35.492 "nvme_io": false, 00:11:35.492 "nvme_io_md": false, 00:11:35.492 "write_zeroes": true, 00:11:35.492 "zcopy": true, 00:11:35.492 "get_zone_info": false, 00:11:35.492 "zone_management": false, 00:11:35.492 "zone_append": false, 00:11:35.492 "compare": false, 00:11:35.492 "compare_and_write": false, 00:11:35.492 "abort": true, 00:11:35.492 "seek_hole": false, 00:11:35.492 
"seek_data": false, 00:11:35.492 "copy": true, 00:11:35.492 "nvme_iov_md": false 00:11:35.492 }, 00:11:35.492 "memory_domains": [ 00:11:35.492 { 00:11:35.492 "dma_device_id": "system", 00:11:35.492 "dma_device_type": 1 00:11:35.492 }, 00:11:35.492 { 00:11:35.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.492 "dma_device_type": 2 00:11:35.492 } 00:11:35.492 ], 00:11:35.492 "driver_specific": {} 00:11:35.492 } 00:11:35.492 ] 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.492 BaseBdev3 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.492 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.492 [ 00:11:35.492 { 00:11:35.492 "name": "BaseBdev3", 00:11:35.492 "aliases": [ 00:11:35.492 "f9a55834-b73c-44e8-aa49-f0ed992720e0" 00:11:35.492 ], 00:11:35.492 "product_name": "Malloc disk", 00:11:35.492 "block_size": 512, 00:11:35.492 "num_blocks": 65536, 00:11:35.492 "uuid": "f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:35.492 "assigned_rate_limits": { 00:11:35.492 "rw_ios_per_sec": 0, 00:11:35.492 "rw_mbytes_per_sec": 0, 00:11:35.492 "r_mbytes_per_sec": 0, 00:11:35.492 "w_mbytes_per_sec": 0 00:11:35.492 }, 00:11:35.492 "claimed": false, 00:11:35.492 "zoned": false, 00:11:35.492 "supported_io_types": { 00:11:35.492 "read": true, 00:11:35.492 "write": true, 00:11:35.492 "unmap": true, 00:11:35.492 "flush": true, 00:11:35.492 "reset": true, 00:11:35.492 "nvme_admin": false, 00:11:35.492 "nvme_io": false, 00:11:35.492 "nvme_io_md": false, 00:11:35.492 "write_zeroes": true, 00:11:35.492 "zcopy": true, 00:11:35.492 "get_zone_info": false, 00:11:35.492 "zone_management": false, 00:11:35.492 "zone_append": false, 00:11:35.492 "compare": false, 00:11:35.492 "compare_and_write": false, 00:11:35.492 "abort": true, 00:11:35.493 "seek_hole": false, 00:11:35.493 "seek_data": false, 
00:11:35.493 "copy": true, 00:11:35.493 "nvme_iov_md": false 00:11:35.493 }, 00:11:35.493 "memory_domains": [ 00:11:35.493 { 00:11:35.493 "dma_device_id": "system", 00:11:35.493 "dma_device_type": 1 00:11:35.493 }, 00:11:35.493 { 00:11:35.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.493 "dma_device_type": 2 00:11:35.493 } 00:11:35.493 ], 00:11:35.493 "driver_specific": {} 00:11:35.493 } 00:11:35.493 ] 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.493 BaseBdev4 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.493 
18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.493 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.751 [ 00:11:35.751 { 00:11:35.751 "name": "BaseBdev4", 00:11:35.751 "aliases": [ 00:11:35.751 "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f" 00:11:35.751 ], 00:11:35.751 "product_name": "Malloc disk", 00:11:35.752 "block_size": 512, 00:11:35.752 "num_blocks": 65536, 00:11:35.752 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:35.752 "assigned_rate_limits": { 00:11:35.752 "rw_ios_per_sec": 0, 00:11:35.752 "rw_mbytes_per_sec": 0, 00:11:35.752 "r_mbytes_per_sec": 0, 00:11:35.752 "w_mbytes_per_sec": 0 00:11:35.752 }, 00:11:35.752 "claimed": false, 00:11:35.752 "zoned": false, 00:11:35.752 "supported_io_types": { 00:11:35.752 "read": true, 00:11:35.752 "write": true, 00:11:35.752 "unmap": true, 00:11:35.752 "flush": true, 00:11:35.752 "reset": true, 00:11:35.752 "nvme_admin": false, 00:11:35.752 "nvme_io": false, 00:11:35.752 "nvme_io_md": false, 00:11:35.752 "write_zeroes": true, 00:11:35.752 "zcopy": true, 00:11:35.752 "get_zone_info": false, 00:11:35.752 "zone_management": false, 00:11:35.752 "zone_append": false, 00:11:35.752 "compare": false, 00:11:35.752 "compare_and_write": false, 00:11:35.752 "abort": true, 00:11:35.752 "seek_hole": false, 00:11:35.752 "seek_data": false, 00:11:35.752 
"copy": true, 00:11:35.752 "nvme_iov_md": false 00:11:35.752 }, 00:11:35.752 "memory_domains": [ 00:11:35.752 { 00:11:35.752 "dma_device_id": "system", 00:11:35.752 "dma_device_type": 1 00:11:35.752 }, 00:11:35.752 { 00:11:35.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.752 "dma_device_type": 2 00:11:35.752 } 00:11:35.752 ], 00:11:35.752 "driver_specific": {} 00:11:35.752 } 00:11:35.752 ] 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.752 [2024-12-06 18:08:47.674206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.752 [2024-12-06 18:08:47.674271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.752 [2024-12-06 18:08:47.674304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.752 [2024-12-06 18:08:47.676598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.752 [2024-12-06 18:08:47.676668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.752 18:08:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.752 "name": "Existed_Raid", 00:11:35.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.752 "strip_size_kb": 64, 00:11:35.752 "state": "configuring", 00:11:35.752 
"raid_level": "concat", 00:11:35.752 "superblock": false, 00:11:35.752 "num_base_bdevs": 4, 00:11:35.752 "num_base_bdevs_discovered": 3, 00:11:35.752 "num_base_bdevs_operational": 4, 00:11:35.752 "base_bdevs_list": [ 00:11:35.752 { 00:11:35.752 "name": "BaseBdev1", 00:11:35.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.752 "is_configured": false, 00:11:35.752 "data_offset": 0, 00:11:35.752 "data_size": 0 00:11:35.752 }, 00:11:35.752 { 00:11:35.752 "name": "BaseBdev2", 00:11:35.752 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:35.752 "is_configured": true, 00:11:35.752 "data_offset": 0, 00:11:35.752 "data_size": 65536 00:11:35.752 }, 00:11:35.752 { 00:11:35.752 "name": "BaseBdev3", 00:11:35.752 "uuid": "f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:35.752 "is_configured": true, 00:11:35.752 "data_offset": 0, 00:11:35.752 "data_size": 65536 00:11:35.752 }, 00:11:35.752 { 00:11:35.752 "name": "BaseBdev4", 00:11:35.752 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:35.752 "is_configured": true, 00:11:35.752 "data_offset": 0, 00:11:35.752 "data_size": 65536 00:11:35.752 } 00:11:35.752 ] 00:11:35.752 }' 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.752 18:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.012 [2024-12-06 18:08:48.105433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.012 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.012 "name": "Existed_Raid", 00:11:36.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.012 "strip_size_kb": 64, 00:11:36.012 "state": "configuring", 00:11:36.013 "raid_level": "concat", 00:11:36.013 "superblock": false, 
00:11:36.013 "num_base_bdevs": 4, 00:11:36.013 "num_base_bdevs_discovered": 2, 00:11:36.013 "num_base_bdevs_operational": 4, 00:11:36.013 "base_bdevs_list": [ 00:11:36.013 { 00:11:36.013 "name": "BaseBdev1", 00:11:36.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.013 "is_configured": false, 00:11:36.013 "data_offset": 0, 00:11:36.013 "data_size": 0 00:11:36.013 }, 00:11:36.013 { 00:11:36.013 "name": null, 00:11:36.013 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:36.013 "is_configured": false, 00:11:36.013 "data_offset": 0, 00:11:36.013 "data_size": 65536 00:11:36.013 }, 00:11:36.013 { 00:11:36.013 "name": "BaseBdev3", 00:11:36.013 "uuid": "f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:36.013 "is_configured": true, 00:11:36.013 "data_offset": 0, 00:11:36.013 "data_size": 65536 00:11:36.013 }, 00:11:36.013 { 00:11:36.013 "name": "BaseBdev4", 00:11:36.013 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:36.013 "is_configured": true, 00:11:36.013 "data_offset": 0, 00:11:36.013 "data_size": 65536 00:11:36.013 } 00:11:36.013 ] 00:11:36.013 }' 00:11:36.013 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.013 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.580 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.580 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:36.580 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.580 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.580 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.580 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:36.580 18:08:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.580 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.580 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.580 [2024-12-06 18:08:48.628336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.580 BaseBdev1 00:11:36.580 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.581 [ 00:11:36.581 { 00:11:36.581 "name": "BaseBdev1", 00:11:36.581 "aliases": [ 00:11:36.581 "50edacf1-f98d-448e-ab6f-4f2a7f507492" 00:11:36.581 ], 00:11:36.581 "product_name": "Malloc disk", 00:11:36.581 "block_size": 512, 00:11:36.581 "num_blocks": 65536, 00:11:36.581 "uuid": "50edacf1-f98d-448e-ab6f-4f2a7f507492", 00:11:36.581 "assigned_rate_limits": { 00:11:36.581 "rw_ios_per_sec": 0, 00:11:36.581 "rw_mbytes_per_sec": 0, 00:11:36.581 "r_mbytes_per_sec": 0, 00:11:36.581 "w_mbytes_per_sec": 0 00:11:36.581 }, 00:11:36.581 "claimed": true, 00:11:36.581 "claim_type": "exclusive_write", 00:11:36.581 "zoned": false, 00:11:36.581 "supported_io_types": { 00:11:36.581 "read": true, 00:11:36.581 "write": true, 00:11:36.581 "unmap": true, 00:11:36.581 "flush": true, 00:11:36.581 "reset": true, 00:11:36.581 "nvme_admin": false, 00:11:36.581 "nvme_io": false, 00:11:36.581 "nvme_io_md": false, 00:11:36.581 "write_zeroes": true, 00:11:36.581 "zcopy": true, 00:11:36.581 "get_zone_info": false, 00:11:36.581 "zone_management": false, 00:11:36.581 "zone_append": false, 00:11:36.581 "compare": false, 00:11:36.581 "compare_and_write": false, 00:11:36.581 "abort": true, 00:11:36.581 "seek_hole": false, 00:11:36.581 "seek_data": false, 00:11:36.581 "copy": true, 00:11:36.581 "nvme_iov_md": false 00:11:36.581 }, 00:11:36.581 "memory_domains": [ 00:11:36.581 { 00:11:36.581 "dma_device_id": "system", 00:11:36.581 "dma_device_type": 1 00:11:36.581 }, 00:11:36.581 { 00:11:36.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.581 "dma_device_type": 2 00:11:36.581 } 00:11:36.581 ], 00:11:36.581 "driver_specific": {} 00:11:36.581 } 00:11:36.581 ] 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.581 "name": "Existed_Raid", 00:11:36.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.581 "strip_size_kb": 64, 00:11:36.581 "state": "configuring", 00:11:36.581 "raid_level": "concat", 00:11:36.581 "superblock": false, 
00:11:36.581 "num_base_bdevs": 4, 00:11:36.581 "num_base_bdevs_discovered": 3, 00:11:36.581 "num_base_bdevs_operational": 4, 00:11:36.581 "base_bdevs_list": [ 00:11:36.581 { 00:11:36.581 "name": "BaseBdev1", 00:11:36.581 "uuid": "50edacf1-f98d-448e-ab6f-4f2a7f507492", 00:11:36.581 "is_configured": true, 00:11:36.581 "data_offset": 0, 00:11:36.581 "data_size": 65536 00:11:36.581 }, 00:11:36.581 { 00:11:36.581 "name": null, 00:11:36.581 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:36.581 "is_configured": false, 00:11:36.581 "data_offset": 0, 00:11:36.581 "data_size": 65536 00:11:36.581 }, 00:11:36.581 { 00:11:36.581 "name": "BaseBdev3", 00:11:36.581 "uuid": "f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:36.581 "is_configured": true, 00:11:36.581 "data_offset": 0, 00:11:36.581 "data_size": 65536 00:11:36.581 }, 00:11:36.581 { 00:11:36.581 "name": "BaseBdev4", 00:11:36.581 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:36.581 "is_configured": true, 00:11:36.581 "data_offset": 0, 00:11:36.581 "data_size": 65536 00:11:36.581 } 00:11:36.581 ] 00:11:36.581 }' 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.581 18:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:37.148 18:08:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.148 [2024-12-06 18:08:49.167523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.148 18:08:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.148 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.148 "name": "Existed_Raid", 00:11:37.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.148 "strip_size_kb": 64, 00:11:37.148 "state": "configuring", 00:11:37.148 "raid_level": "concat", 00:11:37.148 "superblock": false, 00:11:37.148 "num_base_bdevs": 4, 00:11:37.148 "num_base_bdevs_discovered": 2, 00:11:37.148 "num_base_bdevs_operational": 4, 00:11:37.148 "base_bdevs_list": [ 00:11:37.148 { 00:11:37.148 "name": "BaseBdev1", 00:11:37.148 "uuid": "50edacf1-f98d-448e-ab6f-4f2a7f507492", 00:11:37.148 "is_configured": true, 00:11:37.148 "data_offset": 0, 00:11:37.148 "data_size": 65536 00:11:37.148 }, 00:11:37.148 { 00:11:37.148 "name": null, 00:11:37.148 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:37.148 "is_configured": false, 00:11:37.148 "data_offset": 0, 00:11:37.148 "data_size": 65536 00:11:37.148 }, 00:11:37.148 { 00:11:37.148 "name": null, 00:11:37.148 "uuid": "f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:37.148 "is_configured": false, 00:11:37.148 "data_offset": 0, 00:11:37.148 "data_size": 65536 00:11:37.148 }, 00:11:37.148 { 00:11:37.148 "name": "BaseBdev4", 00:11:37.148 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:37.148 "is_configured": true, 00:11:37.149 "data_offset": 0, 00:11:37.149 "data_size": 65536 00:11:37.149 } 00:11:37.149 ] 00:11:37.149 }' 00:11:37.149 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.149 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.406 18:08:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:37.406 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.406 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.406 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.664 [2024-12-06 18:08:49.610777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.664 "name": "Existed_Raid", 00:11:37.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.664 "strip_size_kb": 64, 00:11:37.664 "state": "configuring", 00:11:37.664 "raid_level": "concat", 00:11:37.664 "superblock": false, 00:11:37.664 "num_base_bdevs": 4, 00:11:37.664 "num_base_bdevs_discovered": 3, 00:11:37.664 "num_base_bdevs_operational": 4, 00:11:37.664 "base_bdevs_list": [ 00:11:37.664 { 00:11:37.664 "name": "BaseBdev1", 00:11:37.664 "uuid": "50edacf1-f98d-448e-ab6f-4f2a7f507492", 00:11:37.664 "is_configured": true, 00:11:37.664 "data_offset": 0, 00:11:37.664 "data_size": 65536 00:11:37.664 }, 00:11:37.664 { 00:11:37.664 "name": null, 00:11:37.664 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:37.664 "is_configured": false, 00:11:37.664 "data_offset": 0, 00:11:37.664 "data_size": 65536 00:11:37.664 }, 00:11:37.664 { 00:11:37.664 "name": "BaseBdev3", 00:11:37.664 "uuid": 
"f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:37.664 "is_configured": true, 00:11:37.664 "data_offset": 0, 00:11:37.664 "data_size": 65536 00:11:37.664 }, 00:11:37.664 { 00:11:37.664 "name": "BaseBdev4", 00:11:37.664 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:37.664 "is_configured": true, 00:11:37.664 "data_offset": 0, 00:11:37.664 "data_size": 65536 00:11:37.664 } 00:11:37.664 ] 00:11:37.664 }' 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.664 18:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.923 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:37.923 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.923 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.923 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.923 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.923 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:37.923 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.923 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.923 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.923 [2024-12-06 18:08:50.082024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.245 "name": "Existed_Raid", 00:11:38.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.245 "strip_size_kb": 64, 00:11:38.245 "state": "configuring", 00:11:38.245 "raid_level": "concat", 00:11:38.245 "superblock": false, 00:11:38.245 "num_base_bdevs": 4, 00:11:38.245 
"num_base_bdevs_discovered": 2, 00:11:38.245 "num_base_bdevs_operational": 4, 00:11:38.245 "base_bdevs_list": [ 00:11:38.245 { 00:11:38.245 "name": null, 00:11:38.245 "uuid": "50edacf1-f98d-448e-ab6f-4f2a7f507492", 00:11:38.245 "is_configured": false, 00:11:38.245 "data_offset": 0, 00:11:38.245 "data_size": 65536 00:11:38.245 }, 00:11:38.245 { 00:11:38.245 "name": null, 00:11:38.245 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:38.245 "is_configured": false, 00:11:38.245 "data_offset": 0, 00:11:38.245 "data_size": 65536 00:11:38.245 }, 00:11:38.245 { 00:11:38.245 "name": "BaseBdev3", 00:11:38.245 "uuid": "f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:38.245 "is_configured": true, 00:11:38.245 "data_offset": 0, 00:11:38.245 "data_size": 65536 00:11:38.245 }, 00:11:38.245 { 00:11:38.245 "name": "BaseBdev4", 00:11:38.245 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:38.245 "is_configured": true, 00:11:38.245 "data_offset": 0, 00:11:38.245 "data_size": 65536 00:11:38.245 } 00:11:38.245 ] 00:11:38.245 }' 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.245 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.503 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.503 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.503 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.503 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.761 [2024-12-06 18:08:50.694096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.761 "name": "Existed_Raid", 00:11:38.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.761 "strip_size_kb": 64, 00:11:38.761 "state": "configuring", 00:11:38.761 "raid_level": "concat", 00:11:38.761 "superblock": false, 00:11:38.761 "num_base_bdevs": 4, 00:11:38.761 "num_base_bdevs_discovered": 3, 00:11:38.761 "num_base_bdevs_operational": 4, 00:11:38.761 "base_bdevs_list": [ 00:11:38.761 { 00:11:38.761 "name": null, 00:11:38.761 "uuid": "50edacf1-f98d-448e-ab6f-4f2a7f507492", 00:11:38.761 "is_configured": false, 00:11:38.761 "data_offset": 0, 00:11:38.761 "data_size": 65536 00:11:38.761 }, 00:11:38.761 { 00:11:38.761 "name": "BaseBdev2", 00:11:38.761 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:38.761 "is_configured": true, 00:11:38.761 "data_offset": 0, 00:11:38.761 "data_size": 65536 00:11:38.761 }, 00:11:38.761 { 00:11:38.761 "name": "BaseBdev3", 00:11:38.761 "uuid": "f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:38.761 "is_configured": true, 00:11:38.761 "data_offset": 0, 00:11:38.761 "data_size": 65536 00:11:38.761 }, 00:11:38.761 { 00:11:38.761 "name": "BaseBdev4", 00:11:38.761 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:38.761 "is_configured": true, 00:11:38.761 "data_offset": 0, 00:11:38.761 "data_size": 65536 00:11:38.761 } 00:11:38.761 ] 00:11:38.761 }' 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.761 18:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.019 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:39.020 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.020 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.020 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.020 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.278 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 50edacf1-f98d-448e-ab6f-4f2a7f507492 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.279 [2024-12-06 18:08:51.290096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:39.279 [2024-12-06 18:08:51.290163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:39.279 [2024-12-06 18:08:51.290171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:39.279 [2024-12-06 18:08:51.290445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:39.279 [2024-12-06 18:08:51.290593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:39.279 [2024-12-06 18:08:51.290612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:39.279 [2024-12-06 18:08:51.290853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.279 NewBaseBdev 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.279 [ 00:11:39.279 { 00:11:39.279 "name": "NewBaseBdev", 00:11:39.279 "aliases": [ 00:11:39.279 "50edacf1-f98d-448e-ab6f-4f2a7f507492" 00:11:39.279 ], 00:11:39.279 "product_name": "Malloc disk", 00:11:39.279 "block_size": 512, 00:11:39.279 "num_blocks": 65536, 00:11:39.279 "uuid": "50edacf1-f98d-448e-ab6f-4f2a7f507492", 00:11:39.279 "assigned_rate_limits": { 00:11:39.279 "rw_ios_per_sec": 0, 00:11:39.279 "rw_mbytes_per_sec": 0, 00:11:39.279 "r_mbytes_per_sec": 0, 00:11:39.279 "w_mbytes_per_sec": 0 00:11:39.279 }, 00:11:39.279 "claimed": true, 00:11:39.279 "claim_type": "exclusive_write", 00:11:39.279 "zoned": false, 00:11:39.279 "supported_io_types": { 00:11:39.279 "read": true, 00:11:39.279 "write": true, 00:11:39.279 "unmap": true, 00:11:39.279 "flush": true, 00:11:39.279 "reset": true, 00:11:39.279 "nvme_admin": false, 00:11:39.279 "nvme_io": false, 00:11:39.279 "nvme_io_md": false, 00:11:39.279 "write_zeroes": true, 00:11:39.279 "zcopy": true, 00:11:39.279 "get_zone_info": false, 00:11:39.279 "zone_management": false, 00:11:39.279 "zone_append": false, 00:11:39.279 "compare": false, 00:11:39.279 "compare_and_write": false, 00:11:39.279 "abort": true, 00:11:39.279 "seek_hole": false, 00:11:39.279 "seek_data": false, 00:11:39.279 "copy": true, 00:11:39.279 "nvme_iov_md": false 00:11:39.279 }, 00:11:39.279 "memory_domains": [ 00:11:39.279 { 00:11:39.279 "dma_device_id": "system", 00:11:39.279 "dma_device_type": 1 00:11:39.279 }, 00:11:39.279 { 00:11:39.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.279 "dma_device_type": 2 00:11:39.279 } 00:11:39.279 ], 00:11:39.279 "driver_specific": {} 00:11:39.279 } 00:11:39.279 ] 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.279 "name": "Existed_Raid", 00:11:39.279 "uuid": "cdfe4b84-2b71-4a7e-a272-ee5edfa68072", 00:11:39.279 "strip_size_kb": 64, 00:11:39.279 "state": "online", 00:11:39.279 "raid_level": "concat", 00:11:39.279 "superblock": false, 00:11:39.279 
"num_base_bdevs": 4, 00:11:39.279 "num_base_bdevs_discovered": 4, 00:11:39.279 "num_base_bdevs_operational": 4, 00:11:39.279 "base_bdevs_list": [ 00:11:39.279 { 00:11:39.279 "name": "NewBaseBdev", 00:11:39.279 "uuid": "50edacf1-f98d-448e-ab6f-4f2a7f507492", 00:11:39.279 "is_configured": true, 00:11:39.279 "data_offset": 0, 00:11:39.279 "data_size": 65536 00:11:39.279 }, 00:11:39.279 { 00:11:39.279 "name": "BaseBdev2", 00:11:39.279 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:39.279 "is_configured": true, 00:11:39.279 "data_offset": 0, 00:11:39.279 "data_size": 65536 00:11:39.279 }, 00:11:39.279 { 00:11:39.279 "name": "BaseBdev3", 00:11:39.279 "uuid": "f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:39.279 "is_configured": true, 00:11:39.279 "data_offset": 0, 00:11:39.279 "data_size": 65536 00:11:39.279 }, 00:11:39.279 { 00:11:39.279 "name": "BaseBdev4", 00:11:39.279 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:39.279 "is_configured": true, 00:11:39.279 "data_offset": 0, 00:11:39.279 "data_size": 65536 00:11:39.279 } 00:11:39.279 ] 00:11:39.279 }' 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.279 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.846 18:08:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.846 [2024-12-06 18:08:51.817613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.846 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.846 "name": "Existed_Raid", 00:11:39.846 "aliases": [ 00:11:39.846 "cdfe4b84-2b71-4a7e-a272-ee5edfa68072" 00:11:39.846 ], 00:11:39.846 "product_name": "Raid Volume", 00:11:39.846 "block_size": 512, 00:11:39.846 "num_blocks": 262144, 00:11:39.846 "uuid": "cdfe4b84-2b71-4a7e-a272-ee5edfa68072", 00:11:39.846 "assigned_rate_limits": { 00:11:39.846 "rw_ios_per_sec": 0, 00:11:39.846 "rw_mbytes_per_sec": 0, 00:11:39.846 "r_mbytes_per_sec": 0, 00:11:39.846 "w_mbytes_per_sec": 0 00:11:39.846 }, 00:11:39.846 "claimed": false, 00:11:39.846 "zoned": false, 00:11:39.846 "supported_io_types": { 00:11:39.846 "read": true, 00:11:39.846 "write": true, 00:11:39.846 "unmap": true, 00:11:39.846 "flush": true, 00:11:39.846 "reset": true, 00:11:39.846 "nvme_admin": false, 00:11:39.846 "nvme_io": false, 00:11:39.846 "nvme_io_md": false, 00:11:39.846 "write_zeroes": true, 00:11:39.846 "zcopy": false, 00:11:39.846 "get_zone_info": false, 00:11:39.846 "zone_management": false, 00:11:39.846 "zone_append": false, 00:11:39.846 "compare": false, 00:11:39.846 "compare_and_write": false, 00:11:39.846 "abort": false, 00:11:39.846 "seek_hole": false, 00:11:39.846 "seek_data": false, 00:11:39.846 "copy": false, 00:11:39.846 "nvme_iov_md": false 00:11:39.846 }, 
00:11:39.846 "memory_domains": [ 00:11:39.846 { 00:11:39.846 "dma_device_id": "system", 00:11:39.846 "dma_device_type": 1 00:11:39.846 }, 00:11:39.846 { 00:11:39.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.846 "dma_device_type": 2 00:11:39.846 }, 00:11:39.846 { 00:11:39.846 "dma_device_id": "system", 00:11:39.846 "dma_device_type": 1 00:11:39.846 }, 00:11:39.846 { 00:11:39.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.846 "dma_device_type": 2 00:11:39.846 }, 00:11:39.846 { 00:11:39.846 "dma_device_id": "system", 00:11:39.846 "dma_device_type": 1 00:11:39.846 }, 00:11:39.846 { 00:11:39.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.846 "dma_device_type": 2 00:11:39.846 }, 00:11:39.846 { 00:11:39.846 "dma_device_id": "system", 00:11:39.846 "dma_device_type": 1 00:11:39.846 }, 00:11:39.846 { 00:11:39.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.846 "dma_device_type": 2 00:11:39.846 } 00:11:39.846 ], 00:11:39.846 "driver_specific": { 00:11:39.846 "raid": { 00:11:39.846 "uuid": "cdfe4b84-2b71-4a7e-a272-ee5edfa68072", 00:11:39.846 "strip_size_kb": 64, 00:11:39.846 "state": "online", 00:11:39.846 "raid_level": "concat", 00:11:39.846 "superblock": false, 00:11:39.846 "num_base_bdevs": 4, 00:11:39.846 "num_base_bdevs_discovered": 4, 00:11:39.846 "num_base_bdevs_operational": 4, 00:11:39.846 "base_bdevs_list": [ 00:11:39.847 { 00:11:39.847 "name": "NewBaseBdev", 00:11:39.847 "uuid": "50edacf1-f98d-448e-ab6f-4f2a7f507492", 00:11:39.847 "is_configured": true, 00:11:39.847 "data_offset": 0, 00:11:39.847 "data_size": 65536 00:11:39.847 }, 00:11:39.847 { 00:11:39.847 "name": "BaseBdev2", 00:11:39.847 "uuid": "0a43a444-8e13-4a64-86ac-e3d2697604de", 00:11:39.847 "is_configured": true, 00:11:39.847 "data_offset": 0, 00:11:39.847 "data_size": 65536 00:11:39.847 }, 00:11:39.847 { 00:11:39.847 "name": "BaseBdev3", 00:11:39.847 "uuid": "f9a55834-b73c-44e8-aa49-f0ed992720e0", 00:11:39.847 "is_configured": true, 00:11:39.847 "data_offset": 0, 
00:11:39.847 "data_size": 65536 00:11:39.847 }, 00:11:39.847 { 00:11:39.847 "name": "BaseBdev4", 00:11:39.847 "uuid": "a158e4b3-a16b-4712-9e1f-29dbbb5fe52f", 00:11:39.847 "is_configured": true, 00:11:39.847 "data_offset": 0, 00:11:39.847 "data_size": 65536 00:11:39.847 } 00:11:39.847 ] 00:11:39.847 } 00:11:39.847 } 00:11:39.847 }' 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:39.847 BaseBdev2 00:11:39.847 BaseBdev3 00:11:39.847 BaseBdev4' 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.847 18:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.105 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.106 [2024-12-06 18:08:52.128713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.106 [2024-12-06 18:08:52.128748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.106 [2024-12-06 18:08:52.128832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.106 [2024-12-06 18:08:52.128905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.106 [2024-12-06 18:08:52.128916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71760 00:11:40.106 18:08:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71760 ']' 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71760 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71760 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.106 killing process with pid 71760 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71760' 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71760 00:11:40.106 [2024-12-06 18:08:52.174319] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.106 18:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71760 00:11:40.673 [2024-12-06 18:08:52.600490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.052 18:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:42.052 00:11:42.052 real 0m11.864s 00:11:42.052 user 0m18.882s 00:11:42.052 sys 0m1.995s 00:11:42.052 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.052 18:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.052 ************************************ 00:11:42.052 END TEST raid_state_function_test 00:11:42.052 ************************************ 00:11:42.052 18:08:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:42.052 18:08:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:42.052 18:08:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.052 18:08:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.053 ************************************ 00:11:42.053 START TEST raid_state_function_test_sb 00:11:42.053 ************************************ 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72437 00:11:42.053 Process raid 
pid: 72437 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72437' 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72437 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72437 ']' 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.053 18:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.053 [2024-12-06 18:08:53.958796] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:11:42.053 [2024-12-06 18:08:53.958910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.053 [2024-12-06 18:08:54.137725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.313 [2024-12-06 18:08:54.256532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.313 [2024-12-06 18:08:54.476268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.313 [2024-12-06 18:08:54.476315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 [2024-12-06 18:08:54.863156] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.884 [2024-12-06 18:08:54.863232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.884 [2024-12-06 18:08:54.863245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.884 [2024-12-06 18:08:54.863256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.884 [2024-12-06 18:08:54.863269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:42.884 [2024-12-06 18:08:54.863280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.884 [2024-12-06 18:08:54.863287] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:42.884 [2024-12-06 18:08:54.863297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.884 18:08:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.884 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.884 "name": "Existed_Raid", 00:11:42.884 "uuid": "75910917-82ac-4119-b5af-8f4e3757bfca", 00:11:42.884 "strip_size_kb": 64, 00:11:42.884 "state": "configuring", 00:11:42.884 "raid_level": "concat", 00:11:42.885 "superblock": true, 00:11:42.885 "num_base_bdevs": 4, 00:11:42.885 "num_base_bdevs_discovered": 0, 00:11:42.885 "num_base_bdevs_operational": 4, 00:11:42.885 "base_bdevs_list": [ 00:11:42.885 { 00:11:42.885 "name": "BaseBdev1", 00:11:42.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.885 "is_configured": false, 00:11:42.885 "data_offset": 0, 00:11:42.885 "data_size": 0 00:11:42.885 }, 00:11:42.885 { 00:11:42.885 "name": "BaseBdev2", 00:11:42.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.885 "is_configured": false, 00:11:42.885 "data_offset": 0, 00:11:42.885 "data_size": 0 00:11:42.885 }, 00:11:42.885 { 00:11:42.885 "name": "BaseBdev3", 00:11:42.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.885 "is_configured": false, 00:11:42.885 "data_offset": 0, 00:11:42.885 "data_size": 0 00:11:42.885 }, 00:11:42.885 { 00:11:42.885 "name": "BaseBdev4", 00:11:42.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.885 "is_configured": false, 00:11:42.885 "data_offset": 0, 00:11:42.885 "data_size": 0 00:11:42.885 } 00:11:42.885 ] 00:11:42.885 }' 00:11:42.885 18:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.885 18:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 18:08:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 [2024-12-06 18:08:55.386177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.454 [2024-12-06 18:08:55.386227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 [2024-12-06 18:08:55.398157] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.454 [2024-12-06 18:08:55.398218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.454 [2024-12-06 18:08:55.398227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.454 [2024-12-06 18:08:55.398236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.454 [2024-12-06 18:08:55.398243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.454 [2024-12-06 18:08:55.398251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.454 [2024-12-06 18:08:55.398258] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:43.454 [2024-12-06 18:08:55.398267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 [2024-12-06 18:08:55.446960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.454 BaseBdev1 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.454 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 [ 00:11:43.454 { 00:11:43.454 "name": "BaseBdev1", 00:11:43.454 "aliases": [ 00:11:43.454 "fd537dc4-2bee-4e01-a5eb-57cec9089394" 00:11:43.454 ], 00:11:43.454 "product_name": "Malloc disk", 00:11:43.454 "block_size": 512, 00:11:43.454 "num_blocks": 65536, 00:11:43.454 "uuid": "fd537dc4-2bee-4e01-a5eb-57cec9089394", 00:11:43.454 "assigned_rate_limits": { 00:11:43.454 "rw_ios_per_sec": 0, 00:11:43.454 "rw_mbytes_per_sec": 0, 00:11:43.454 "r_mbytes_per_sec": 0, 00:11:43.454 "w_mbytes_per_sec": 0 00:11:43.454 }, 00:11:43.454 "claimed": true, 00:11:43.454 "claim_type": "exclusive_write", 00:11:43.454 "zoned": false, 00:11:43.454 "supported_io_types": { 00:11:43.454 "read": true, 00:11:43.454 "write": true, 00:11:43.454 "unmap": true, 00:11:43.454 "flush": true, 00:11:43.454 "reset": true, 00:11:43.454 "nvme_admin": false, 00:11:43.454 "nvme_io": false, 00:11:43.454 "nvme_io_md": false, 00:11:43.454 "write_zeroes": true, 00:11:43.454 "zcopy": true, 00:11:43.454 "get_zone_info": false, 00:11:43.454 "zone_management": false, 00:11:43.454 "zone_append": false, 00:11:43.454 "compare": false, 00:11:43.454 "compare_and_write": false, 00:11:43.454 "abort": true, 00:11:43.454 "seek_hole": false, 00:11:43.454 "seek_data": false, 00:11:43.454 "copy": true, 00:11:43.454 "nvme_iov_md": false 00:11:43.454 }, 00:11:43.454 "memory_domains": [ 00:11:43.454 { 00:11:43.454 "dma_device_id": "system", 00:11:43.454 "dma_device_type": 1 00:11:43.454 }, 00:11:43.454 { 00:11:43.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.455 "dma_device_type": 2 00:11:43.455 } 
00:11:43.455 ], 00:11:43.455 "driver_specific": {} 00:11:43.455 } 00:11:43.455 ] 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.455 18:08:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.455 "name": "Existed_Raid", 00:11:43.455 "uuid": "a583cad3-944e-4ced-b341-7378c04ac1db", 00:11:43.455 "strip_size_kb": 64, 00:11:43.455 "state": "configuring", 00:11:43.455 "raid_level": "concat", 00:11:43.455 "superblock": true, 00:11:43.455 "num_base_bdevs": 4, 00:11:43.455 "num_base_bdevs_discovered": 1, 00:11:43.455 "num_base_bdevs_operational": 4, 00:11:43.455 "base_bdevs_list": [ 00:11:43.455 { 00:11:43.455 "name": "BaseBdev1", 00:11:43.455 "uuid": "fd537dc4-2bee-4e01-a5eb-57cec9089394", 00:11:43.455 "is_configured": true, 00:11:43.455 "data_offset": 2048, 00:11:43.455 "data_size": 63488 00:11:43.455 }, 00:11:43.455 { 00:11:43.455 "name": "BaseBdev2", 00:11:43.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.455 "is_configured": false, 00:11:43.455 "data_offset": 0, 00:11:43.455 "data_size": 0 00:11:43.455 }, 00:11:43.455 { 00:11:43.455 "name": "BaseBdev3", 00:11:43.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.455 "is_configured": false, 00:11:43.455 "data_offset": 0, 00:11:43.455 "data_size": 0 00:11:43.455 }, 00:11:43.455 { 00:11:43.455 "name": "BaseBdev4", 00:11:43.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.455 "is_configured": false, 00:11:43.455 "data_offset": 0, 00:11:43.455 "data_size": 0 00:11:43.455 } 00:11:43.455 ] 00:11:43.455 }' 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.455 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.027 18:08:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.027 [2024-12-06 18:08:55.922239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.027 [2024-12-06 18:08:55.922313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.027 [2024-12-06 18:08:55.934262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.027 [2024-12-06 18:08:55.936096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:44.027 [2024-12-06 18:08:55.936138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:44.027 [2024-12-06 18:08:55.936148] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:44.027 [2024-12-06 18:08:55.936158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:44.027 [2024-12-06 18:08:55.936165] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:44.027 [2024-12-06 18:08:55.936173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:44.027 "name": "Existed_Raid", 00:11:44.027 "uuid": "ff2ad545-c503-48dd-b61e-538ef67b52e3", 00:11:44.027 "strip_size_kb": 64, 00:11:44.027 "state": "configuring", 00:11:44.027 "raid_level": "concat", 00:11:44.027 "superblock": true, 00:11:44.027 "num_base_bdevs": 4, 00:11:44.027 "num_base_bdevs_discovered": 1, 00:11:44.027 "num_base_bdevs_operational": 4, 00:11:44.027 "base_bdevs_list": [ 00:11:44.027 { 00:11:44.027 "name": "BaseBdev1", 00:11:44.027 "uuid": "fd537dc4-2bee-4e01-a5eb-57cec9089394", 00:11:44.027 "is_configured": true, 00:11:44.027 "data_offset": 2048, 00:11:44.027 "data_size": 63488 00:11:44.027 }, 00:11:44.027 { 00:11:44.027 "name": "BaseBdev2", 00:11:44.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.027 "is_configured": false, 00:11:44.027 "data_offset": 0, 00:11:44.027 "data_size": 0 00:11:44.027 }, 00:11:44.027 { 00:11:44.027 "name": "BaseBdev3", 00:11:44.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.027 "is_configured": false, 00:11:44.027 "data_offset": 0, 00:11:44.027 "data_size": 0 00:11:44.027 }, 00:11:44.027 { 00:11:44.027 "name": "BaseBdev4", 00:11:44.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.027 "is_configured": false, 00:11:44.027 "data_offset": 0, 00:11:44.027 "data_size": 0 00:11:44.027 } 00:11:44.027 ] 00:11:44.027 }' 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.027 18:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.288 [2024-12-06 18:08:56.444386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:44.288 BaseBdev2 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.288 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.549 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.549 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.549 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.549 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.549 [ 00:11:44.549 { 00:11:44.549 "name": "BaseBdev2", 00:11:44.549 "aliases": [ 00:11:44.549 "757bd16d-1ca6-4f7b-aabd-7c1a1de30331" 00:11:44.549 ], 00:11:44.549 "product_name": "Malloc disk", 00:11:44.549 "block_size": 512, 00:11:44.549 "num_blocks": 65536, 00:11:44.549 "uuid": "757bd16d-1ca6-4f7b-aabd-7c1a1de30331", 
00:11:44.549 "assigned_rate_limits": { 00:11:44.549 "rw_ios_per_sec": 0, 00:11:44.549 "rw_mbytes_per_sec": 0, 00:11:44.549 "r_mbytes_per_sec": 0, 00:11:44.549 "w_mbytes_per_sec": 0 00:11:44.549 }, 00:11:44.549 "claimed": true, 00:11:44.549 "claim_type": "exclusive_write", 00:11:44.549 "zoned": false, 00:11:44.549 "supported_io_types": { 00:11:44.549 "read": true, 00:11:44.549 "write": true, 00:11:44.549 "unmap": true, 00:11:44.550 "flush": true, 00:11:44.550 "reset": true, 00:11:44.550 "nvme_admin": false, 00:11:44.550 "nvme_io": false, 00:11:44.550 "nvme_io_md": false, 00:11:44.550 "write_zeroes": true, 00:11:44.550 "zcopy": true, 00:11:44.550 "get_zone_info": false, 00:11:44.550 "zone_management": false, 00:11:44.550 "zone_append": false, 00:11:44.550 "compare": false, 00:11:44.550 "compare_and_write": false, 00:11:44.550 "abort": true, 00:11:44.550 "seek_hole": false, 00:11:44.550 "seek_data": false, 00:11:44.550 "copy": true, 00:11:44.550 "nvme_iov_md": false 00:11:44.550 }, 00:11:44.550 "memory_domains": [ 00:11:44.550 { 00:11:44.550 "dma_device_id": "system", 00:11:44.550 "dma_device_type": 1 00:11:44.550 }, 00:11:44.550 { 00:11:44.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.550 "dma_device_type": 2 00:11:44.550 } 00:11:44.550 ], 00:11:44.550 "driver_specific": {} 00:11:44.550 } 00:11:44.550 ] 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.550 "name": "Existed_Raid", 00:11:44.550 "uuid": "ff2ad545-c503-48dd-b61e-538ef67b52e3", 00:11:44.550 "strip_size_kb": 64, 00:11:44.550 "state": "configuring", 00:11:44.550 "raid_level": "concat", 00:11:44.550 "superblock": true, 00:11:44.550 "num_base_bdevs": 4, 00:11:44.550 "num_base_bdevs_discovered": 2, 00:11:44.550 
"num_base_bdevs_operational": 4, 00:11:44.550 "base_bdevs_list": [ 00:11:44.550 { 00:11:44.550 "name": "BaseBdev1", 00:11:44.550 "uuid": "fd537dc4-2bee-4e01-a5eb-57cec9089394", 00:11:44.550 "is_configured": true, 00:11:44.550 "data_offset": 2048, 00:11:44.550 "data_size": 63488 00:11:44.550 }, 00:11:44.550 { 00:11:44.550 "name": "BaseBdev2", 00:11:44.550 "uuid": "757bd16d-1ca6-4f7b-aabd-7c1a1de30331", 00:11:44.550 "is_configured": true, 00:11:44.550 "data_offset": 2048, 00:11:44.550 "data_size": 63488 00:11:44.550 }, 00:11:44.550 { 00:11:44.550 "name": "BaseBdev3", 00:11:44.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.550 "is_configured": false, 00:11:44.550 "data_offset": 0, 00:11:44.550 "data_size": 0 00:11:44.550 }, 00:11:44.550 { 00:11:44.550 "name": "BaseBdev4", 00:11:44.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.550 "is_configured": false, 00:11:44.550 "data_offset": 0, 00:11:44.550 "data_size": 0 00:11:44.550 } 00:11:44.550 ] 00:11:44.550 }' 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.550 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.810 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.810 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.810 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.070 [2024-12-06 18:08:56.994627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.070 BaseBdev3 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.070 18:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.070 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.070 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:45.070 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.070 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.070 [ 00:11:45.070 { 00:11:45.070 "name": "BaseBdev3", 00:11:45.070 "aliases": [ 00:11:45.070 "8b8794e8-4f2b-4788-97c4-d62826d35d50" 00:11:45.070 ], 00:11:45.070 "product_name": "Malloc disk", 00:11:45.070 "block_size": 512, 00:11:45.070 "num_blocks": 65536, 00:11:45.070 "uuid": "8b8794e8-4f2b-4788-97c4-d62826d35d50", 00:11:45.070 "assigned_rate_limits": { 00:11:45.070 "rw_ios_per_sec": 0, 00:11:45.070 "rw_mbytes_per_sec": 0, 00:11:45.070 "r_mbytes_per_sec": 0, 00:11:45.070 "w_mbytes_per_sec": 0 00:11:45.070 }, 00:11:45.070 "claimed": true, 00:11:45.070 "claim_type": "exclusive_write", 00:11:45.070 "zoned": false, 00:11:45.070 "supported_io_types": { 
00:11:45.070 "read": true, 00:11:45.070 "write": true, 00:11:45.070 "unmap": true, 00:11:45.070 "flush": true, 00:11:45.070 "reset": true, 00:11:45.070 "nvme_admin": false, 00:11:45.070 "nvme_io": false, 00:11:45.070 "nvme_io_md": false, 00:11:45.070 "write_zeroes": true, 00:11:45.070 "zcopy": true, 00:11:45.070 "get_zone_info": false, 00:11:45.070 "zone_management": false, 00:11:45.070 "zone_append": false, 00:11:45.070 "compare": false, 00:11:45.070 "compare_and_write": false, 00:11:45.070 "abort": true, 00:11:45.070 "seek_hole": false, 00:11:45.070 "seek_data": false, 00:11:45.070 "copy": true, 00:11:45.070 "nvme_iov_md": false 00:11:45.070 }, 00:11:45.070 "memory_domains": [ 00:11:45.070 { 00:11:45.071 "dma_device_id": "system", 00:11:45.071 "dma_device_type": 1 00:11:45.071 }, 00:11:45.071 { 00:11:45.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.071 "dma_device_type": 2 00:11:45.071 } 00:11:45.071 ], 00:11:45.071 "driver_specific": {} 00:11:45.071 } 00:11:45.071 ] 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.071 "name": "Existed_Raid", 00:11:45.071 "uuid": "ff2ad545-c503-48dd-b61e-538ef67b52e3", 00:11:45.071 "strip_size_kb": 64, 00:11:45.071 "state": "configuring", 00:11:45.071 "raid_level": "concat", 00:11:45.071 "superblock": true, 00:11:45.071 "num_base_bdevs": 4, 00:11:45.071 "num_base_bdevs_discovered": 3, 00:11:45.071 "num_base_bdevs_operational": 4, 00:11:45.071 "base_bdevs_list": [ 00:11:45.071 { 00:11:45.071 "name": "BaseBdev1", 00:11:45.071 "uuid": "fd537dc4-2bee-4e01-a5eb-57cec9089394", 00:11:45.071 "is_configured": true, 00:11:45.071 "data_offset": 2048, 00:11:45.071 "data_size": 63488 00:11:45.071 }, 00:11:45.071 { 00:11:45.071 "name": "BaseBdev2", 00:11:45.071 
"uuid": "757bd16d-1ca6-4f7b-aabd-7c1a1de30331", 00:11:45.071 "is_configured": true, 00:11:45.071 "data_offset": 2048, 00:11:45.071 "data_size": 63488 00:11:45.071 }, 00:11:45.071 { 00:11:45.071 "name": "BaseBdev3", 00:11:45.071 "uuid": "8b8794e8-4f2b-4788-97c4-d62826d35d50", 00:11:45.071 "is_configured": true, 00:11:45.071 "data_offset": 2048, 00:11:45.071 "data_size": 63488 00:11:45.071 }, 00:11:45.071 { 00:11:45.071 "name": "BaseBdev4", 00:11:45.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.071 "is_configured": false, 00:11:45.071 "data_offset": 0, 00:11:45.071 "data_size": 0 00:11:45.071 } 00:11:45.071 ] 00:11:45.071 }' 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.071 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.330 [2024-12-06 18:08:57.468047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.330 [2024-12-06 18:08:57.468396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:45.330 [2024-12-06 18:08:57.468421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:45.330 [2024-12-06 18:08:57.468743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:45.330 BaseBdev4 00:11:45.330 [2024-12-06 18:08:57.468925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:45.330 [2024-12-06 18:08:57.468946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:45.330 [2024-12-06 18:08:57.469128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.330 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.590 [ 00:11:45.591 { 00:11:45.591 "name": "BaseBdev4", 00:11:45.591 "aliases": [ 00:11:45.591 "bc0dcb1a-b5c2-4a97-9127-34f8bfc52ed0" 00:11:45.591 ], 00:11:45.591 "product_name": "Malloc disk", 00:11:45.591 "block_size": 512, 00:11:45.591 
"num_blocks": 65536, 00:11:45.591 "uuid": "bc0dcb1a-b5c2-4a97-9127-34f8bfc52ed0", 00:11:45.591 "assigned_rate_limits": { 00:11:45.591 "rw_ios_per_sec": 0, 00:11:45.591 "rw_mbytes_per_sec": 0, 00:11:45.591 "r_mbytes_per_sec": 0, 00:11:45.591 "w_mbytes_per_sec": 0 00:11:45.591 }, 00:11:45.591 "claimed": true, 00:11:45.591 "claim_type": "exclusive_write", 00:11:45.591 "zoned": false, 00:11:45.591 "supported_io_types": { 00:11:45.591 "read": true, 00:11:45.591 "write": true, 00:11:45.591 "unmap": true, 00:11:45.591 "flush": true, 00:11:45.591 "reset": true, 00:11:45.591 "nvme_admin": false, 00:11:45.591 "nvme_io": false, 00:11:45.591 "nvme_io_md": false, 00:11:45.591 "write_zeroes": true, 00:11:45.591 "zcopy": true, 00:11:45.591 "get_zone_info": false, 00:11:45.591 "zone_management": false, 00:11:45.591 "zone_append": false, 00:11:45.591 "compare": false, 00:11:45.591 "compare_and_write": false, 00:11:45.591 "abort": true, 00:11:45.591 "seek_hole": false, 00:11:45.591 "seek_data": false, 00:11:45.591 "copy": true, 00:11:45.591 "nvme_iov_md": false 00:11:45.591 }, 00:11:45.591 "memory_domains": [ 00:11:45.591 { 00:11:45.591 "dma_device_id": "system", 00:11:45.591 "dma_device_type": 1 00:11:45.591 }, 00:11:45.591 { 00:11:45.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.591 "dma_device_type": 2 00:11:45.591 } 00:11:45.591 ], 00:11:45.591 "driver_specific": {} 00:11:45.591 } 00:11:45.591 ] 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.591 "name": "Existed_Raid", 00:11:45.591 "uuid": "ff2ad545-c503-48dd-b61e-538ef67b52e3", 00:11:45.591 "strip_size_kb": 64, 00:11:45.591 "state": "online", 00:11:45.591 "raid_level": "concat", 00:11:45.591 "superblock": true, 00:11:45.591 "num_base_bdevs": 4, 
00:11:45.591 "num_base_bdevs_discovered": 4, 00:11:45.591 "num_base_bdevs_operational": 4, 00:11:45.591 "base_bdevs_list": [ 00:11:45.591 { 00:11:45.591 "name": "BaseBdev1", 00:11:45.591 "uuid": "fd537dc4-2bee-4e01-a5eb-57cec9089394", 00:11:45.591 "is_configured": true, 00:11:45.591 "data_offset": 2048, 00:11:45.591 "data_size": 63488 00:11:45.591 }, 00:11:45.591 { 00:11:45.591 "name": "BaseBdev2", 00:11:45.591 "uuid": "757bd16d-1ca6-4f7b-aabd-7c1a1de30331", 00:11:45.591 "is_configured": true, 00:11:45.591 "data_offset": 2048, 00:11:45.591 "data_size": 63488 00:11:45.591 }, 00:11:45.591 { 00:11:45.591 "name": "BaseBdev3", 00:11:45.591 "uuid": "8b8794e8-4f2b-4788-97c4-d62826d35d50", 00:11:45.591 "is_configured": true, 00:11:45.591 "data_offset": 2048, 00:11:45.591 "data_size": 63488 00:11:45.591 }, 00:11:45.591 { 00:11:45.591 "name": "BaseBdev4", 00:11:45.591 "uuid": "bc0dcb1a-b5c2-4a97-9127-34f8bfc52ed0", 00:11:45.591 "is_configured": true, 00:11:45.591 "data_offset": 2048, 00:11:45.591 "data_size": 63488 00:11:45.591 } 00:11:45.591 ] 00:11:45.591 }' 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.591 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.850 
18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.850 [2024-12-06 18:08:57.951755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.850 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.850 "name": "Existed_Raid", 00:11:45.850 "aliases": [ 00:11:45.850 "ff2ad545-c503-48dd-b61e-538ef67b52e3" 00:11:45.850 ], 00:11:45.850 "product_name": "Raid Volume", 00:11:45.850 "block_size": 512, 00:11:45.850 "num_blocks": 253952, 00:11:45.850 "uuid": "ff2ad545-c503-48dd-b61e-538ef67b52e3", 00:11:45.850 "assigned_rate_limits": { 00:11:45.850 "rw_ios_per_sec": 0, 00:11:45.850 "rw_mbytes_per_sec": 0, 00:11:45.850 "r_mbytes_per_sec": 0, 00:11:45.850 "w_mbytes_per_sec": 0 00:11:45.850 }, 00:11:45.850 "claimed": false, 00:11:45.850 "zoned": false, 00:11:45.850 "supported_io_types": { 00:11:45.850 "read": true, 00:11:45.850 "write": true, 00:11:45.850 "unmap": true, 00:11:45.850 "flush": true, 00:11:45.850 "reset": true, 00:11:45.850 "nvme_admin": false, 00:11:45.850 "nvme_io": false, 00:11:45.850 "nvme_io_md": false, 00:11:45.851 "write_zeroes": true, 00:11:45.851 "zcopy": false, 00:11:45.851 "get_zone_info": false, 00:11:45.851 "zone_management": false, 00:11:45.851 "zone_append": false, 00:11:45.851 "compare": false, 00:11:45.851 "compare_and_write": false, 00:11:45.851 "abort": false, 00:11:45.851 "seek_hole": false, 00:11:45.851 "seek_data": false, 00:11:45.851 "copy": false, 00:11:45.851 
"nvme_iov_md": false 00:11:45.851 }, 00:11:45.851 "memory_domains": [ 00:11:45.851 { 00:11:45.851 "dma_device_id": "system", 00:11:45.851 "dma_device_type": 1 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.851 "dma_device_type": 2 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "dma_device_id": "system", 00:11:45.851 "dma_device_type": 1 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.851 "dma_device_type": 2 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "dma_device_id": "system", 00:11:45.851 "dma_device_type": 1 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.851 "dma_device_type": 2 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "dma_device_id": "system", 00:11:45.851 "dma_device_type": 1 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.851 "dma_device_type": 2 00:11:45.851 } 00:11:45.851 ], 00:11:45.851 "driver_specific": { 00:11:45.851 "raid": { 00:11:45.851 "uuid": "ff2ad545-c503-48dd-b61e-538ef67b52e3", 00:11:45.851 "strip_size_kb": 64, 00:11:45.851 "state": "online", 00:11:45.851 "raid_level": "concat", 00:11:45.851 "superblock": true, 00:11:45.851 "num_base_bdevs": 4, 00:11:45.851 "num_base_bdevs_discovered": 4, 00:11:45.851 "num_base_bdevs_operational": 4, 00:11:45.851 "base_bdevs_list": [ 00:11:45.851 { 00:11:45.851 "name": "BaseBdev1", 00:11:45.851 "uuid": "fd537dc4-2bee-4e01-a5eb-57cec9089394", 00:11:45.851 "is_configured": true, 00:11:45.851 "data_offset": 2048, 00:11:45.851 "data_size": 63488 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "name": "BaseBdev2", 00:11:45.851 "uuid": "757bd16d-1ca6-4f7b-aabd-7c1a1de30331", 00:11:45.851 "is_configured": true, 00:11:45.851 "data_offset": 2048, 00:11:45.851 "data_size": 63488 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "name": "BaseBdev3", 00:11:45.851 "uuid": "8b8794e8-4f2b-4788-97c4-d62826d35d50", 00:11:45.851 "is_configured": true, 
00:11:45.851 "data_offset": 2048, 00:11:45.851 "data_size": 63488 00:11:45.851 }, 00:11:45.851 { 00:11:45.851 "name": "BaseBdev4", 00:11:45.851 "uuid": "bc0dcb1a-b5c2-4a97-9127-34f8bfc52ed0", 00:11:45.851 "is_configured": true, 00:11:45.851 "data_offset": 2048, 00:11:45.851 "data_size": 63488 00:11:45.851 } 00:11:45.851 ] 00:11:45.851 } 00:11:45.851 } 00:11:45.851 }' 00:11:45.851 18:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:46.111 BaseBdev2 00:11:46.111 BaseBdev3 00:11:46.111 BaseBdev4' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.111 18:08:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.111 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.111 [2024-12-06 18:08:58.254893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.111 [2024-12-06 18:08:58.254929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.111 [2024-12-06 18:08:58.254984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.371 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.371 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:46.371 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:46.371 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:46.371 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:46.371 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:46.371 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:46.371 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.372 "name": "Existed_Raid", 00:11:46.372 "uuid": "ff2ad545-c503-48dd-b61e-538ef67b52e3", 00:11:46.372 "strip_size_kb": 64, 00:11:46.372 "state": "offline", 00:11:46.372 "raid_level": "concat", 00:11:46.372 "superblock": true, 00:11:46.372 "num_base_bdevs": 4, 00:11:46.372 "num_base_bdevs_discovered": 3, 00:11:46.372 "num_base_bdevs_operational": 3, 00:11:46.372 "base_bdevs_list": [ 00:11:46.372 { 00:11:46.372 "name": null, 00:11:46.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.372 "is_configured": false, 00:11:46.372 "data_offset": 0, 00:11:46.372 "data_size": 63488 00:11:46.372 }, 00:11:46.372 { 00:11:46.372 "name": "BaseBdev2", 00:11:46.372 "uuid": "757bd16d-1ca6-4f7b-aabd-7c1a1de30331", 00:11:46.372 "is_configured": true, 00:11:46.372 "data_offset": 2048, 00:11:46.372 "data_size": 63488 00:11:46.372 }, 00:11:46.372 { 00:11:46.372 "name": "BaseBdev3", 00:11:46.372 "uuid": "8b8794e8-4f2b-4788-97c4-d62826d35d50", 00:11:46.372 "is_configured": true, 00:11:46.372 "data_offset": 2048, 00:11:46.372 "data_size": 63488 00:11:46.372 }, 00:11:46.372 { 00:11:46.372 "name": "BaseBdev4", 00:11:46.372 "uuid": "bc0dcb1a-b5c2-4a97-9127-34f8bfc52ed0", 00:11:46.372 "is_configured": true, 00:11:46.372 "data_offset": 2048, 00:11:46.372 "data_size": 63488 00:11:46.372 } 00:11:46.372 ] 00:11:46.372 }' 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.372 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.676 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:46.676 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.676 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.676 
18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.676 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:46.676 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.676 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.935 [2024-12-06 18:08:58.842909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:46.935 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:46.936 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.936 18:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:46.936 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.936 18:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.936 [2024-12-06 18:08:59.000843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:47.194 18:08:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.194 [2024-12-06 18:08:59.169724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:47.194 [2024-12-06 18:08:59.169799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.194 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.454 BaseBdev2 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.454 [ 00:11:47.454 { 00:11:47.454 "name": "BaseBdev2", 00:11:47.454 "aliases": [ 00:11:47.454 
"a95d4259-2e88-4fef-bab5-49ea64e18a8c" 00:11:47.454 ], 00:11:47.454 "product_name": "Malloc disk", 00:11:47.454 "block_size": 512, 00:11:47.454 "num_blocks": 65536, 00:11:47.454 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:47.454 "assigned_rate_limits": { 00:11:47.454 "rw_ios_per_sec": 0, 00:11:47.454 "rw_mbytes_per_sec": 0, 00:11:47.454 "r_mbytes_per_sec": 0, 00:11:47.454 "w_mbytes_per_sec": 0 00:11:47.454 }, 00:11:47.454 "claimed": false, 00:11:47.454 "zoned": false, 00:11:47.454 "supported_io_types": { 00:11:47.454 "read": true, 00:11:47.454 "write": true, 00:11:47.454 "unmap": true, 00:11:47.454 "flush": true, 00:11:47.454 "reset": true, 00:11:47.454 "nvme_admin": false, 00:11:47.454 "nvme_io": false, 00:11:47.454 "nvme_io_md": false, 00:11:47.454 "write_zeroes": true, 00:11:47.454 "zcopy": true, 00:11:47.454 "get_zone_info": false, 00:11:47.454 "zone_management": false, 00:11:47.454 "zone_append": false, 00:11:47.454 "compare": false, 00:11:47.454 "compare_and_write": false, 00:11:47.454 "abort": true, 00:11:47.454 "seek_hole": false, 00:11:47.454 "seek_data": false, 00:11:47.454 "copy": true, 00:11:47.454 "nvme_iov_md": false 00:11:47.454 }, 00:11:47.454 "memory_domains": [ 00:11:47.454 { 00:11:47.454 "dma_device_id": "system", 00:11:47.454 "dma_device_type": 1 00:11:47.454 }, 00:11:47.454 { 00:11:47.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.454 "dma_device_type": 2 00:11:47.454 } 00:11:47.454 ], 00:11:47.454 "driver_specific": {} 00:11:47.454 } 00:11:47.454 ] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.454 18:08:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.454 BaseBdev3 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.454 [ 00:11:47.454 { 
00:11:47.454 "name": "BaseBdev3", 00:11:47.454 "aliases": [ 00:11:47.454 "b4f9dc69-24b4-4118-a7ba-72036184e511" 00:11:47.454 ], 00:11:47.454 "product_name": "Malloc disk", 00:11:47.454 "block_size": 512, 00:11:47.454 "num_blocks": 65536, 00:11:47.454 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:47.454 "assigned_rate_limits": { 00:11:47.454 "rw_ios_per_sec": 0, 00:11:47.454 "rw_mbytes_per_sec": 0, 00:11:47.454 "r_mbytes_per_sec": 0, 00:11:47.454 "w_mbytes_per_sec": 0 00:11:47.454 }, 00:11:47.454 "claimed": false, 00:11:47.454 "zoned": false, 00:11:47.454 "supported_io_types": { 00:11:47.454 "read": true, 00:11:47.454 "write": true, 00:11:47.454 "unmap": true, 00:11:47.454 "flush": true, 00:11:47.454 "reset": true, 00:11:47.454 "nvme_admin": false, 00:11:47.454 "nvme_io": false, 00:11:47.454 "nvme_io_md": false, 00:11:47.454 "write_zeroes": true, 00:11:47.454 "zcopy": true, 00:11:47.454 "get_zone_info": false, 00:11:47.454 "zone_management": false, 00:11:47.454 "zone_append": false, 00:11:47.454 "compare": false, 00:11:47.454 "compare_and_write": false, 00:11:47.454 "abort": true, 00:11:47.454 "seek_hole": false, 00:11:47.454 "seek_data": false, 00:11:47.454 "copy": true, 00:11:47.454 "nvme_iov_md": false 00:11:47.454 }, 00:11:47.454 "memory_domains": [ 00:11:47.454 { 00:11:47.454 "dma_device_id": "system", 00:11:47.454 "dma_device_type": 1 00:11:47.454 }, 00:11:47.454 { 00:11:47.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.454 "dma_device_type": 2 00:11:47.454 } 00:11:47.454 ], 00:11:47.454 "driver_specific": {} 00:11:47.454 } 00:11:47.454 ] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.454 BaseBdev4 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.454 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:47.454 [ 00:11:47.454 { 00:11:47.454 "name": "BaseBdev4", 00:11:47.454 "aliases": [ 00:11:47.454 "edf50c9c-58cf-4998-bb54-92c93b10fadd" 00:11:47.454 ], 00:11:47.454 "product_name": "Malloc disk", 00:11:47.454 "block_size": 512, 00:11:47.454 "num_blocks": 65536, 00:11:47.454 "uuid": "edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:47.454 "assigned_rate_limits": { 00:11:47.454 "rw_ios_per_sec": 0, 00:11:47.454 "rw_mbytes_per_sec": 0, 00:11:47.454 "r_mbytes_per_sec": 0, 00:11:47.454 "w_mbytes_per_sec": 0 00:11:47.454 }, 00:11:47.454 "claimed": false, 00:11:47.454 "zoned": false, 00:11:47.454 "supported_io_types": { 00:11:47.454 "read": true, 00:11:47.454 "write": true, 00:11:47.454 "unmap": true, 00:11:47.454 "flush": true, 00:11:47.454 "reset": true, 00:11:47.455 "nvme_admin": false, 00:11:47.455 "nvme_io": false, 00:11:47.455 "nvme_io_md": false, 00:11:47.455 "write_zeroes": true, 00:11:47.455 "zcopy": true, 00:11:47.455 "get_zone_info": false, 00:11:47.455 "zone_management": false, 00:11:47.455 "zone_append": false, 00:11:47.455 "compare": false, 00:11:47.455 "compare_and_write": false, 00:11:47.455 "abort": true, 00:11:47.455 "seek_hole": false, 00:11:47.455 "seek_data": false, 00:11:47.455 "copy": true, 00:11:47.455 "nvme_iov_md": false 00:11:47.455 }, 00:11:47.455 "memory_domains": [ 00:11:47.455 { 00:11:47.455 "dma_device_id": "system", 00:11:47.455 "dma_device_type": 1 00:11:47.455 }, 00:11:47.455 { 00:11:47.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.455 "dma_device_type": 2 00:11:47.455 } 00:11:47.455 ], 00:11:47.455 "driver_specific": {} 00:11:47.455 } 00:11:47.455 ] 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.455 18:08:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.455 [2024-12-06 18:08:59.572385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.455 [2024-12-06 18:08:59.572433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.455 [2024-12-06 18:08:59.572457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.455 [2024-12-06 18:08:59.574328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.455 [2024-12-06 18:08:59.574396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.455 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.714 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.714 "name": "Existed_Raid", 00:11:47.714 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:47.714 "strip_size_kb": 64, 00:11:47.714 "state": "configuring", 00:11:47.714 "raid_level": "concat", 00:11:47.714 "superblock": true, 00:11:47.714 "num_base_bdevs": 4, 00:11:47.714 "num_base_bdevs_discovered": 3, 00:11:47.714 "num_base_bdevs_operational": 4, 00:11:47.714 "base_bdevs_list": [ 00:11:47.714 { 00:11:47.714 "name": "BaseBdev1", 00:11:47.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.714 "is_configured": false, 00:11:47.714 "data_offset": 0, 00:11:47.714 "data_size": 0 00:11:47.714 }, 00:11:47.714 { 00:11:47.714 "name": "BaseBdev2", 00:11:47.714 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:47.714 "is_configured": true, 00:11:47.714 "data_offset": 2048, 00:11:47.714 "data_size": 63488 
00:11:47.714 }, 00:11:47.714 { 00:11:47.714 "name": "BaseBdev3", 00:11:47.714 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:47.714 "is_configured": true, 00:11:47.714 "data_offset": 2048, 00:11:47.714 "data_size": 63488 00:11:47.714 }, 00:11:47.714 { 00:11:47.714 "name": "BaseBdev4", 00:11:47.714 "uuid": "edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:47.714 "is_configured": true, 00:11:47.714 "data_offset": 2048, 00:11:47.714 "data_size": 63488 00:11:47.714 } 00:11:47.714 ] 00:11:47.714 }' 00:11:47.714 18:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.714 18:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.973 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:47.973 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.973 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.973 [2024-12-06 18:09:00.031611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:47.973 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.973 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.973 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.973 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.973 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.974 "name": "Existed_Raid", 00:11:47.974 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:47.974 "strip_size_kb": 64, 00:11:47.974 "state": "configuring", 00:11:47.974 "raid_level": "concat", 00:11:47.974 "superblock": true, 00:11:47.974 "num_base_bdevs": 4, 00:11:47.974 "num_base_bdevs_discovered": 2, 00:11:47.974 "num_base_bdevs_operational": 4, 00:11:47.974 "base_bdevs_list": [ 00:11:47.974 { 00:11:47.974 "name": "BaseBdev1", 00:11:47.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.974 "is_configured": false, 00:11:47.974 "data_offset": 0, 00:11:47.974 "data_size": 0 00:11:47.974 }, 00:11:47.974 { 00:11:47.974 "name": null, 00:11:47.974 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:47.974 "is_configured": false, 00:11:47.974 "data_offset": 0, 00:11:47.974 "data_size": 63488 
00:11:47.974 }, 00:11:47.974 { 00:11:47.974 "name": "BaseBdev3", 00:11:47.974 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:47.974 "is_configured": true, 00:11:47.974 "data_offset": 2048, 00:11:47.974 "data_size": 63488 00:11:47.974 }, 00:11:47.974 { 00:11:47.974 "name": "BaseBdev4", 00:11:47.974 "uuid": "edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:47.974 "is_configured": true, 00:11:47.974 "data_offset": 2048, 00:11:47.974 "data_size": 63488 00:11:47.974 } 00:11:47.974 ] 00:11:47.974 }' 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.974 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.543 [2024-12-06 18:09:00.556530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.543 BaseBdev1 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.543 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.543 [ 00:11:48.543 { 00:11:48.543 "name": "BaseBdev1", 00:11:48.543 "aliases": [ 00:11:48.543 "3355b80b-84ae-4d65-bf6e-7ade139f185b" 00:11:48.543 ], 00:11:48.543 "product_name": "Malloc disk", 00:11:48.543 "block_size": 512, 00:11:48.543 "num_blocks": 65536, 00:11:48.543 "uuid": "3355b80b-84ae-4d65-bf6e-7ade139f185b", 00:11:48.543 "assigned_rate_limits": { 00:11:48.543 "rw_ios_per_sec": 0, 00:11:48.543 "rw_mbytes_per_sec": 0, 
00:11:48.543 "r_mbytes_per_sec": 0, 00:11:48.543 "w_mbytes_per_sec": 0 00:11:48.543 }, 00:11:48.543 "claimed": true, 00:11:48.543 "claim_type": "exclusive_write", 00:11:48.543 "zoned": false, 00:11:48.543 "supported_io_types": { 00:11:48.543 "read": true, 00:11:48.543 "write": true, 00:11:48.543 "unmap": true, 00:11:48.543 "flush": true, 00:11:48.543 "reset": true, 00:11:48.543 "nvme_admin": false, 00:11:48.543 "nvme_io": false, 00:11:48.543 "nvme_io_md": false, 00:11:48.543 "write_zeroes": true, 00:11:48.543 "zcopy": true, 00:11:48.543 "get_zone_info": false, 00:11:48.543 "zone_management": false, 00:11:48.543 "zone_append": false, 00:11:48.543 "compare": false, 00:11:48.543 "compare_and_write": false, 00:11:48.543 "abort": true, 00:11:48.543 "seek_hole": false, 00:11:48.543 "seek_data": false, 00:11:48.543 "copy": true, 00:11:48.543 "nvme_iov_md": false 00:11:48.543 }, 00:11:48.543 "memory_domains": [ 00:11:48.543 { 00:11:48.543 "dma_device_id": "system", 00:11:48.543 "dma_device_type": 1 00:11:48.543 }, 00:11:48.543 { 00:11:48.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.543 "dma_device_type": 2 00:11:48.543 } 00:11:48.543 ], 00:11:48.544 "driver_specific": {} 00:11:48.544 } 00:11:48.544 ] 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.544 18:09:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.544 "name": "Existed_Raid", 00:11:48.544 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:48.544 "strip_size_kb": 64, 00:11:48.544 "state": "configuring", 00:11:48.544 "raid_level": "concat", 00:11:48.544 "superblock": true, 00:11:48.544 "num_base_bdevs": 4, 00:11:48.544 "num_base_bdevs_discovered": 3, 00:11:48.544 "num_base_bdevs_operational": 4, 00:11:48.544 "base_bdevs_list": [ 00:11:48.544 { 00:11:48.544 "name": "BaseBdev1", 00:11:48.544 "uuid": "3355b80b-84ae-4d65-bf6e-7ade139f185b", 00:11:48.544 "is_configured": true, 00:11:48.544 "data_offset": 2048, 00:11:48.544 "data_size": 63488 00:11:48.544 }, 00:11:48.544 { 
00:11:48.544 "name": null, 00:11:48.544 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:48.544 "is_configured": false, 00:11:48.544 "data_offset": 0, 00:11:48.544 "data_size": 63488 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "name": "BaseBdev3", 00:11:48.544 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:48.544 "is_configured": true, 00:11:48.544 "data_offset": 2048, 00:11:48.544 "data_size": 63488 00:11:48.544 }, 00:11:48.544 { 00:11:48.544 "name": "BaseBdev4", 00:11:48.544 "uuid": "edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:48.544 "is_configured": true, 00:11:48.544 "data_offset": 2048, 00:11:48.544 "data_size": 63488 00:11:48.544 } 00:11:48.544 ] 00:11:48.544 }' 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.544 18:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.113 [2024-12-06 18:09:01.119678] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.113 18:09:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.113 "name": "Existed_Raid", 00:11:49.113 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:49.113 "strip_size_kb": 64, 00:11:49.113 "state": "configuring", 00:11:49.113 "raid_level": "concat", 00:11:49.113 "superblock": true, 00:11:49.113 "num_base_bdevs": 4, 00:11:49.113 "num_base_bdevs_discovered": 2, 00:11:49.113 "num_base_bdevs_operational": 4, 00:11:49.113 "base_bdevs_list": [ 00:11:49.113 { 00:11:49.113 "name": "BaseBdev1", 00:11:49.113 "uuid": "3355b80b-84ae-4d65-bf6e-7ade139f185b", 00:11:49.113 "is_configured": true, 00:11:49.113 "data_offset": 2048, 00:11:49.113 "data_size": 63488 00:11:49.113 }, 00:11:49.113 { 00:11:49.113 "name": null, 00:11:49.113 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:49.113 "is_configured": false, 00:11:49.113 "data_offset": 0, 00:11:49.113 "data_size": 63488 00:11:49.113 }, 00:11:49.113 { 00:11:49.113 "name": null, 00:11:49.113 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:49.113 "is_configured": false, 00:11:49.113 "data_offset": 0, 00:11:49.113 "data_size": 63488 00:11:49.113 }, 00:11:49.113 { 00:11:49.113 "name": "BaseBdev4", 00:11:49.113 "uuid": "edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:49.113 "is_configured": true, 00:11:49.113 "data_offset": 2048, 00:11:49.113 "data_size": 63488 00:11:49.113 } 00:11:49.113 ] 00:11:49.113 }' 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.113 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.681 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.681 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.681 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.681 
18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.681 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.681 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.682 [2024-12-06 18:09:01.626842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.682 "name": "Existed_Raid", 00:11:49.682 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:49.682 "strip_size_kb": 64, 00:11:49.682 "state": "configuring", 00:11:49.682 "raid_level": "concat", 00:11:49.682 "superblock": true, 00:11:49.682 "num_base_bdevs": 4, 00:11:49.682 "num_base_bdevs_discovered": 3, 00:11:49.682 "num_base_bdevs_operational": 4, 00:11:49.682 "base_bdevs_list": [ 00:11:49.682 { 00:11:49.682 "name": "BaseBdev1", 00:11:49.682 "uuid": "3355b80b-84ae-4d65-bf6e-7ade139f185b", 00:11:49.682 "is_configured": true, 00:11:49.682 "data_offset": 2048, 00:11:49.682 "data_size": 63488 00:11:49.682 }, 00:11:49.682 { 00:11:49.682 "name": null, 00:11:49.682 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:49.682 "is_configured": false, 00:11:49.682 "data_offset": 0, 00:11:49.682 "data_size": 63488 00:11:49.682 }, 00:11:49.682 { 00:11:49.682 "name": "BaseBdev3", 00:11:49.682 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:49.682 "is_configured": true, 00:11:49.682 "data_offset": 2048, 00:11:49.682 "data_size": 63488 00:11:49.682 }, 00:11:49.682 { 00:11:49.682 "name": "BaseBdev4", 00:11:49.682 "uuid": 
"edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:49.682 "is_configured": true, 00:11:49.682 "data_offset": 2048, 00:11:49.682 "data_size": 63488 00:11:49.682 } 00:11:49.682 ] 00:11:49.682 }' 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.682 18:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.945 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.945 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.945 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.945 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.945 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.945 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:49.945 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:49.945 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.945 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.945 [2024-12-06 18:09:02.070140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.205 "name": "Existed_Raid", 00:11:50.205 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:50.205 "strip_size_kb": 64, 00:11:50.205 "state": "configuring", 00:11:50.205 "raid_level": "concat", 00:11:50.205 "superblock": true, 00:11:50.205 "num_base_bdevs": 4, 00:11:50.205 "num_base_bdevs_discovered": 2, 00:11:50.205 "num_base_bdevs_operational": 4, 00:11:50.205 "base_bdevs_list": [ 00:11:50.205 { 00:11:50.205 "name": null, 00:11:50.205 
"uuid": "3355b80b-84ae-4d65-bf6e-7ade139f185b", 00:11:50.205 "is_configured": false, 00:11:50.205 "data_offset": 0, 00:11:50.205 "data_size": 63488 00:11:50.205 }, 00:11:50.205 { 00:11:50.205 "name": null, 00:11:50.205 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:50.205 "is_configured": false, 00:11:50.205 "data_offset": 0, 00:11:50.205 "data_size": 63488 00:11:50.205 }, 00:11:50.205 { 00:11:50.205 "name": "BaseBdev3", 00:11:50.205 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:50.205 "is_configured": true, 00:11:50.205 "data_offset": 2048, 00:11:50.205 "data_size": 63488 00:11:50.205 }, 00:11:50.205 { 00:11:50.205 "name": "BaseBdev4", 00:11:50.205 "uuid": "edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:50.205 "is_configured": true, 00:11:50.205 "data_offset": 2048, 00:11:50.205 "data_size": 63488 00:11:50.205 } 00:11:50.205 ] 00:11:50.205 }' 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.205 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.773 [2024-12-06 18:09:02.700491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.773 18:09:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.773 "name": "Existed_Raid", 00:11:50.773 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:50.773 "strip_size_kb": 64, 00:11:50.773 "state": "configuring", 00:11:50.773 "raid_level": "concat", 00:11:50.773 "superblock": true, 00:11:50.773 "num_base_bdevs": 4, 00:11:50.773 "num_base_bdevs_discovered": 3, 00:11:50.773 "num_base_bdevs_operational": 4, 00:11:50.773 "base_bdevs_list": [ 00:11:50.773 { 00:11:50.773 "name": null, 00:11:50.773 "uuid": "3355b80b-84ae-4d65-bf6e-7ade139f185b", 00:11:50.773 "is_configured": false, 00:11:50.773 "data_offset": 0, 00:11:50.773 "data_size": 63488 00:11:50.773 }, 00:11:50.773 { 00:11:50.773 "name": "BaseBdev2", 00:11:50.773 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:50.773 "is_configured": true, 00:11:50.773 "data_offset": 2048, 00:11:50.773 "data_size": 63488 00:11:50.773 }, 00:11:50.773 { 00:11:50.773 "name": "BaseBdev3", 00:11:50.773 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:50.773 "is_configured": true, 00:11:50.773 "data_offset": 2048, 00:11:50.773 "data_size": 63488 00:11:50.773 }, 00:11:50.773 { 00:11:50.773 "name": "BaseBdev4", 00:11:50.773 "uuid": "edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:50.773 "is_configured": true, 00:11:50.773 "data_offset": 2048, 00:11:50.773 "data_size": 63488 00:11:50.773 } 00:11:50.773 ] 00:11:50.773 }' 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.773 18:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.032 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.032 18:09:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.032 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:51.032 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.032 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3355b80b-84ae-4d65-bf6e-7ade139f185b 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.291 [2024-12-06 18:09:03.293328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:51.291 [2024-12-06 18:09:03.293606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:51.291 [2024-12-06 18:09:03.293618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:51.291 [2024-12-06 18:09:03.293887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:51.291 [2024-12-06 18:09:03.294029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:51.291 [2024-12-06 18:09:03.294049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:51.291 [2024-12-06 18:09:03.294197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.291 NewBaseBdev 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:51.291 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.291 18:09:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.291 [ 00:11:51.291 { 00:11:51.291 "name": "NewBaseBdev", 00:11:51.291 "aliases": [ 00:11:51.291 "3355b80b-84ae-4d65-bf6e-7ade139f185b" 00:11:51.291 ], 00:11:51.291 "product_name": "Malloc disk", 00:11:51.291 "block_size": 512, 00:11:51.291 "num_blocks": 65536, 00:11:51.291 "uuid": "3355b80b-84ae-4d65-bf6e-7ade139f185b", 00:11:51.291 "assigned_rate_limits": { 00:11:51.291 "rw_ios_per_sec": 0, 00:11:51.291 "rw_mbytes_per_sec": 0, 00:11:51.291 "r_mbytes_per_sec": 0, 00:11:51.291 "w_mbytes_per_sec": 0 00:11:51.291 }, 00:11:51.291 "claimed": true, 00:11:51.291 "claim_type": "exclusive_write", 00:11:51.291 "zoned": false, 00:11:51.291 "supported_io_types": { 00:11:51.291 "read": true, 00:11:51.291 "write": true, 00:11:51.291 "unmap": true, 00:11:51.291 "flush": true, 00:11:51.291 "reset": true, 00:11:51.291 "nvme_admin": false, 00:11:51.291 "nvme_io": false, 00:11:51.291 "nvme_io_md": false, 00:11:51.291 "write_zeroes": true, 00:11:51.291 "zcopy": true, 00:11:51.291 "get_zone_info": false, 00:11:51.291 "zone_management": false, 00:11:51.291 "zone_append": false, 00:11:51.291 "compare": false, 00:11:51.291 "compare_and_write": false, 00:11:51.291 "abort": true, 00:11:51.291 "seek_hole": false, 00:11:51.291 "seek_data": false, 00:11:51.291 "copy": true, 00:11:51.291 "nvme_iov_md": false 00:11:51.291 }, 00:11:51.291 "memory_domains": [ 00:11:51.291 { 00:11:51.291 "dma_device_id": "system", 00:11:51.291 "dma_device_type": 1 00:11:51.291 }, 00:11:51.291 { 00:11:51.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.292 "dma_device_type": 2 00:11:51.292 } 00:11:51.292 ], 00:11:51.292 "driver_specific": {} 00:11:51.292 } 00:11:51.292 ] 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:51.292 18:09:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.292 "name": "Existed_Raid", 00:11:51.292 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:51.292 "strip_size_kb": 64, 00:11:51.292 
"state": "online", 00:11:51.292 "raid_level": "concat", 00:11:51.292 "superblock": true, 00:11:51.292 "num_base_bdevs": 4, 00:11:51.292 "num_base_bdevs_discovered": 4, 00:11:51.292 "num_base_bdevs_operational": 4, 00:11:51.292 "base_bdevs_list": [ 00:11:51.292 { 00:11:51.292 "name": "NewBaseBdev", 00:11:51.292 "uuid": "3355b80b-84ae-4d65-bf6e-7ade139f185b", 00:11:51.292 "is_configured": true, 00:11:51.292 "data_offset": 2048, 00:11:51.292 "data_size": 63488 00:11:51.292 }, 00:11:51.292 { 00:11:51.292 "name": "BaseBdev2", 00:11:51.292 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:51.292 "is_configured": true, 00:11:51.292 "data_offset": 2048, 00:11:51.292 "data_size": 63488 00:11:51.292 }, 00:11:51.292 { 00:11:51.292 "name": "BaseBdev3", 00:11:51.292 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:51.292 "is_configured": true, 00:11:51.292 "data_offset": 2048, 00:11:51.292 "data_size": 63488 00:11:51.292 }, 00:11:51.292 { 00:11:51.292 "name": "BaseBdev4", 00:11:51.292 "uuid": "edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:51.292 "is_configured": true, 00:11:51.292 "data_offset": 2048, 00:11:51.292 "data_size": 63488 00:11:51.292 } 00:11:51.292 ] 00:11:51.292 }' 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.292 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:51.859 
18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.859 [2024-12-06 18:09:03.800944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.859 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:51.859 "name": "Existed_Raid", 00:11:51.859 "aliases": [ 00:11:51.859 "b341d51e-2e2f-4574-a230-a772d8b25eeb" 00:11:51.859 ], 00:11:51.859 "product_name": "Raid Volume", 00:11:51.859 "block_size": 512, 00:11:51.859 "num_blocks": 253952, 00:11:51.859 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:51.859 "assigned_rate_limits": { 00:11:51.859 "rw_ios_per_sec": 0, 00:11:51.859 "rw_mbytes_per_sec": 0, 00:11:51.859 "r_mbytes_per_sec": 0, 00:11:51.859 "w_mbytes_per_sec": 0 00:11:51.859 }, 00:11:51.859 "claimed": false, 00:11:51.859 "zoned": false, 00:11:51.859 "supported_io_types": { 00:11:51.859 "read": true, 00:11:51.859 "write": true, 00:11:51.859 "unmap": true, 00:11:51.859 "flush": true, 00:11:51.859 "reset": true, 00:11:51.859 "nvme_admin": false, 00:11:51.859 "nvme_io": false, 00:11:51.859 "nvme_io_md": false, 00:11:51.859 "write_zeroes": true, 00:11:51.859 "zcopy": false, 00:11:51.859 "get_zone_info": false, 00:11:51.859 "zone_management": false, 00:11:51.859 "zone_append": false, 00:11:51.859 "compare": false, 00:11:51.859 "compare_and_write": false, 00:11:51.859 "abort": 
false, 00:11:51.859 "seek_hole": false, 00:11:51.859 "seek_data": false, 00:11:51.859 "copy": false, 00:11:51.859 "nvme_iov_md": false 00:11:51.859 }, 00:11:51.859 "memory_domains": [ 00:11:51.860 { 00:11:51.860 "dma_device_id": "system", 00:11:51.860 "dma_device_type": 1 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.860 "dma_device_type": 2 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 "dma_device_id": "system", 00:11:51.860 "dma_device_type": 1 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.860 "dma_device_type": 2 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 "dma_device_id": "system", 00:11:51.860 "dma_device_type": 1 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.860 "dma_device_type": 2 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 "dma_device_id": "system", 00:11:51.860 "dma_device_type": 1 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.860 "dma_device_type": 2 00:11:51.860 } 00:11:51.860 ], 00:11:51.860 "driver_specific": { 00:11:51.860 "raid": { 00:11:51.860 "uuid": "b341d51e-2e2f-4574-a230-a772d8b25eeb", 00:11:51.860 "strip_size_kb": 64, 00:11:51.860 "state": "online", 00:11:51.860 "raid_level": "concat", 00:11:51.860 "superblock": true, 00:11:51.860 "num_base_bdevs": 4, 00:11:51.860 "num_base_bdevs_discovered": 4, 00:11:51.860 "num_base_bdevs_operational": 4, 00:11:51.860 "base_bdevs_list": [ 00:11:51.860 { 00:11:51.860 "name": "NewBaseBdev", 00:11:51.860 "uuid": "3355b80b-84ae-4d65-bf6e-7ade139f185b", 00:11:51.860 "is_configured": true, 00:11:51.860 "data_offset": 2048, 00:11:51.860 "data_size": 63488 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 "name": "BaseBdev2", 00:11:51.860 "uuid": "a95d4259-2e88-4fef-bab5-49ea64e18a8c", 00:11:51.860 "is_configured": true, 00:11:51.860 "data_offset": 2048, 00:11:51.860 "data_size": 63488 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 
"name": "BaseBdev3", 00:11:51.860 "uuid": "b4f9dc69-24b4-4118-a7ba-72036184e511", 00:11:51.860 "is_configured": true, 00:11:51.860 "data_offset": 2048, 00:11:51.860 "data_size": 63488 00:11:51.860 }, 00:11:51.860 { 00:11:51.860 "name": "BaseBdev4", 00:11:51.860 "uuid": "edf50c9c-58cf-4998-bb54-92c93b10fadd", 00:11:51.860 "is_configured": true, 00:11:51.860 "data_offset": 2048, 00:11:51.860 "data_size": 63488 00:11:51.860 } 00:11:51.860 ] 00:11:51.860 } 00:11:51.860 } 00:11:51.860 }' 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:51.860 BaseBdev2 00:11:51.860 BaseBdev3 00:11:51.860 BaseBdev4' 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.860 18:09:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.860 18:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.860 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.860 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.860 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.121 [2024-12-06 18:09:04.127993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.121 [2024-12-06 18:09:04.128027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.121 [2024-12-06 18:09:04.128127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.121 [2024-12-06 18:09:04.128203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.121 [2024-12-06 18:09:04.128216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72437 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72437 ']' 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72437 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72437 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72437' 00:11:52.121 killing process with pid 72437 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72437 00:11:52.121 [2024-12-06 18:09:04.177316] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.121 18:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72437 00:11:52.688 [2024-12-06 18:09:04.594814] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.620 ************************************ 00:11:53.620 END TEST raid_state_function_test_sb 00:11:53.620 ************************************ 00:11:53.620 18:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:53.620 00:11:53.620 real 0m11.917s 00:11:53.620 user 0m18.916s 00:11:53.620 sys 
0m2.180s 00:11:53.620 18:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.620 18:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.878 18:09:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:53.878 18:09:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:53.878 18:09:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.878 18:09:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.878 ************************************ 00:11:53.878 START TEST raid_superblock_test 00:11:53.878 ************************************ 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73107 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73107 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73107 ']' 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.878 18:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.878 [2024-12-06 18:09:05.944509] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:11:53.878 [2024-12-06 18:09:05.944728] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73107 ] 00:11:54.137 [2024-12-06 18:09:06.117042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.137 [2024-12-06 18:09:06.236373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.396 [2024-12-06 18:09:06.453796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.396 [2024-12-06 18:09:06.453855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:54.656 
18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.656 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.916 malloc1 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.916 [2024-12-06 18:09:06.852795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:54.916 [2024-12-06 18:09:06.852915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.916 [2024-12-06 18:09:06.852983] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:54.916 [2024-12-06 18:09:06.853058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.916 [2024-12-06 18:09:06.855543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.916 [2024-12-06 18:09:06.855623] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:54.916 pt1 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.916 malloc2 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.916 [2024-12-06 18:09:06.916226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:54.916 [2024-12-06 18:09:06.916285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.916 [2024-12-06 18:09:06.916312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:54.916 [2024-12-06 18:09:06.916324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.916 [2024-12-06 18:09:06.918520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.916 [2024-12-06 18:09:06.918625] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:54.916 
pt2 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.916 malloc3 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.916 [2024-12-06 18:09:06.981624] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:54.916 [2024-12-06 18:09:06.981717] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.916 [2024-12-06 18:09:06.981758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:54.916 [2024-12-06 18:09:06.981809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.916 [2024-12-06 18:09:06.984047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.916 [2024-12-06 18:09:06.984137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:54.916 pt3 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.916 18:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.916 malloc4 00:11:54.916 18:09:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.916 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:54.916 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.916 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.917 [2024-12-06 18:09:07.043055] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:54.917 [2024-12-06 18:09:07.043185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.917 [2024-12-06 18:09:07.043240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:54.917 [2024-12-06 18:09:07.043276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.917 [2024-12-06 18:09:07.045590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.917 [2024-12-06 18:09:07.045658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:54.917 pt4 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.917 [2024-12-06 18:09:07.055091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:54.917 [2024-12-06 
18:09:07.057247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:54.917 [2024-12-06 18:09:07.057391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:54.917 [2024-12-06 18:09:07.057474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:54.917 [2024-12-06 18:09:07.057700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:54.917 [2024-12-06 18:09:07.057750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:54.917 [2024-12-06 18:09:07.058058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:54.917 [2024-12-06 18:09:07.058298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:54.917 [2024-12-06 18:09:07.058355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:54.917 [2024-12-06 18:09:07.058590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.917 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.177 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.177 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.177 "name": "raid_bdev1", 00:11:55.177 "uuid": "4bc4dd50-8b0e-4c7d-a71f-251ad57af273", 00:11:55.177 "strip_size_kb": 64, 00:11:55.177 "state": "online", 00:11:55.177 "raid_level": "concat", 00:11:55.177 "superblock": true, 00:11:55.177 "num_base_bdevs": 4, 00:11:55.177 "num_base_bdevs_discovered": 4, 00:11:55.177 "num_base_bdevs_operational": 4, 00:11:55.177 "base_bdevs_list": [ 00:11:55.177 { 00:11:55.177 "name": "pt1", 00:11:55.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:55.177 "is_configured": true, 00:11:55.177 "data_offset": 2048, 00:11:55.177 "data_size": 63488 00:11:55.177 }, 00:11:55.177 { 00:11:55.177 "name": "pt2", 00:11:55.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.177 "is_configured": true, 00:11:55.177 "data_offset": 2048, 00:11:55.177 "data_size": 63488 00:11:55.177 }, 00:11:55.177 { 00:11:55.177 "name": "pt3", 00:11:55.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:55.177 "is_configured": true, 00:11:55.177 "data_offset": 2048, 00:11:55.177 
"data_size": 63488 00:11:55.177 }, 00:11:55.177 { 00:11:55.177 "name": "pt4", 00:11:55.177 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:55.177 "is_configured": true, 00:11:55.177 "data_offset": 2048, 00:11:55.177 "data_size": 63488 00:11:55.177 } 00:11:55.177 ] 00:11:55.177 }' 00:11:55.177 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.177 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.438 [2024-12-06 18:09:07.522609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:55.438 "name": "raid_bdev1", 00:11:55.438 "aliases": [ 00:11:55.438 "4bc4dd50-8b0e-4c7d-a71f-251ad57af273" 
00:11:55.438 ], 00:11:55.438 "product_name": "Raid Volume", 00:11:55.438 "block_size": 512, 00:11:55.438 "num_blocks": 253952, 00:11:55.438 "uuid": "4bc4dd50-8b0e-4c7d-a71f-251ad57af273", 00:11:55.438 "assigned_rate_limits": { 00:11:55.438 "rw_ios_per_sec": 0, 00:11:55.438 "rw_mbytes_per_sec": 0, 00:11:55.438 "r_mbytes_per_sec": 0, 00:11:55.438 "w_mbytes_per_sec": 0 00:11:55.438 }, 00:11:55.438 "claimed": false, 00:11:55.438 "zoned": false, 00:11:55.438 "supported_io_types": { 00:11:55.438 "read": true, 00:11:55.438 "write": true, 00:11:55.438 "unmap": true, 00:11:55.438 "flush": true, 00:11:55.438 "reset": true, 00:11:55.438 "nvme_admin": false, 00:11:55.438 "nvme_io": false, 00:11:55.438 "nvme_io_md": false, 00:11:55.438 "write_zeroes": true, 00:11:55.438 "zcopy": false, 00:11:55.438 "get_zone_info": false, 00:11:55.438 "zone_management": false, 00:11:55.438 "zone_append": false, 00:11:55.438 "compare": false, 00:11:55.438 "compare_and_write": false, 00:11:55.438 "abort": false, 00:11:55.438 "seek_hole": false, 00:11:55.438 "seek_data": false, 00:11:55.438 "copy": false, 00:11:55.438 "nvme_iov_md": false 00:11:55.438 }, 00:11:55.438 "memory_domains": [ 00:11:55.438 { 00:11:55.438 "dma_device_id": "system", 00:11:55.438 "dma_device_type": 1 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.438 "dma_device_type": 2 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "dma_device_id": "system", 00:11:55.438 "dma_device_type": 1 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.438 "dma_device_type": 2 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "dma_device_id": "system", 00:11:55.438 "dma_device_type": 1 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.438 "dma_device_type": 2 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "dma_device_id": "system", 00:11:55.438 "dma_device_type": 1 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:55.438 "dma_device_type": 2 00:11:55.438 } 00:11:55.438 ], 00:11:55.438 "driver_specific": { 00:11:55.438 "raid": { 00:11:55.438 "uuid": "4bc4dd50-8b0e-4c7d-a71f-251ad57af273", 00:11:55.438 "strip_size_kb": 64, 00:11:55.438 "state": "online", 00:11:55.438 "raid_level": "concat", 00:11:55.438 "superblock": true, 00:11:55.438 "num_base_bdevs": 4, 00:11:55.438 "num_base_bdevs_discovered": 4, 00:11:55.438 "num_base_bdevs_operational": 4, 00:11:55.438 "base_bdevs_list": [ 00:11:55.438 { 00:11:55.438 "name": "pt1", 00:11:55.438 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:55.438 "is_configured": true, 00:11:55.438 "data_offset": 2048, 00:11:55.438 "data_size": 63488 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "name": "pt2", 00:11:55.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.438 "is_configured": true, 00:11:55.438 "data_offset": 2048, 00:11:55.438 "data_size": 63488 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "name": "pt3", 00:11:55.438 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:55.438 "is_configured": true, 00:11:55.438 "data_offset": 2048, 00:11:55.438 "data_size": 63488 00:11:55.438 }, 00:11:55.438 { 00:11:55.438 "name": "pt4", 00:11:55.438 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:55.438 "is_configured": true, 00:11:55.438 "data_offset": 2048, 00:11:55.438 "data_size": 63488 00:11:55.438 } 00:11:55.438 ] 00:11:55.438 } 00:11:55.438 } 00:11:55.438 }' 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:55.438 pt2 00:11:55.438 pt3 00:11:55.438 pt4' 00:11:55.438 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.698 18:09:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:55.698 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.698 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:55.698 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.698 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.698 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.698 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.699 18:09:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:55.699 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.699 [2024-12-06 18:09:07.858037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4bc4dd50-8b0e-4c7d-a71f-251ad57af273 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4bc4dd50-8b0e-4c7d-a71f-251ad57af273 ']' 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.958 [2024-12-06 18:09:07.901580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.958 [2024-12-06 18:09:07.901607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.958 [2024-12-06 18:09:07.901696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.958 [2024-12-06 18:09:07.901772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.958 [2024-12-06 18:09:07.901787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:55.958 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.959 18:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.959 18:09:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.959 [2024-12-06 18:09:08.065355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:55.959 [2024-12-06 18:09:08.067439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:55.959 [2024-12-06 18:09:08.067493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:55.959 [2024-12-06 18:09:08.067529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:55.959 [2024-12-06 18:09:08.067584] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:55.959 [2024-12-06 18:09:08.067641] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:55.959 [2024-12-06 18:09:08.067662] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:55.959 [2024-12-06 18:09:08.067683] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:55.959 [2024-12-06 18:09:08.067699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.959 [2024-12-06 18:09:08.067722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:55.959 request: 00:11:55.959 { 00:11:55.959 "name": "raid_bdev1", 00:11:55.959 "raid_level": "concat", 00:11:55.959 "base_bdevs": [ 00:11:55.959 "malloc1", 00:11:55.959 "malloc2", 00:11:55.959 "malloc3", 00:11:55.959 "malloc4" 00:11:55.959 ], 00:11:55.959 "strip_size_kb": 64, 00:11:55.959 "superblock": false, 00:11:55.959 "method": "bdev_raid_create", 00:11:55.959 "req_id": 1 00:11:55.959 } 00:11:55.959 Got JSON-RPC error response 00:11:55.959 response: 00:11:55.959 { 00:11:55.959 "code": -17, 00:11:55.959 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:55.959 } 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.959 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.219 [2024-12-06 18:09:08.129209] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:56.219 [2024-12-06 18:09:08.129316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.219 [2024-12-06 18:09:08.129361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:56.219 [2024-12-06 18:09:08.129400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.219 [2024-12-06 18:09:08.131910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.219 [2024-12-06 18:09:08.132013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:56.219 [2024-12-06 18:09:08.132164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:56.219 [2024-12-06 18:09:08.132276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:56.219 pt1 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.219 "name": "raid_bdev1", 00:11:56.219 "uuid": "4bc4dd50-8b0e-4c7d-a71f-251ad57af273", 00:11:56.219 "strip_size_kb": 64, 00:11:56.219 "state": "configuring", 00:11:56.219 "raid_level": "concat", 00:11:56.219 "superblock": true, 00:11:56.219 "num_base_bdevs": 4, 00:11:56.219 "num_base_bdevs_discovered": 1, 00:11:56.219 "num_base_bdevs_operational": 4, 00:11:56.219 "base_bdevs_list": [ 00:11:56.219 { 00:11:56.219 "name": "pt1", 00:11:56.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.219 "is_configured": true, 00:11:56.219 "data_offset": 2048, 00:11:56.219 "data_size": 63488 00:11:56.219 }, 00:11:56.219 { 00:11:56.219 "name": null, 00:11:56.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.219 "is_configured": false, 00:11:56.219 "data_offset": 2048, 00:11:56.219 "data_size": 63488 00:11:56.219 }, 00:11:56.219 { 00:11:56.219 "name": null, 00:11:56.219 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.219 "is_configured": false, 00:11:56.219 "data_offset": 2048, 00:11:56.219 "data_size": 63488 00:11:56.219 }, 00:11:56.219 { 00:11:56.219 "name": null, 00:11:56.219 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.219 "is_configured": false, 00:11:56.219 "data_offset": 2048, 00:11:56.219 "data_size": 63488 00:11:56.219 } 00:11:56.219 ] 00:11:56.219 }' 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.219 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.479 [2024-12-06 18:09:08.624389] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:56.479 [2024-12-06 18:09:08.624471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.479 [2024-12-06 18:09:08.624494] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:56.479 [2024-12-06 18:09:08.624507] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.479 [2024-12-06 18:09:08.625029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.479 [2024-12-06 18:09:08.625070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:56.479 [2024-12-06 18:09:08.625168] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:56.479 [2024-12-06 18:09:08.625198] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:56.479 pt2 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.479 [2024-12-06 18:09:08.636390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.479 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.737 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.737 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.737 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.737 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.737 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.738 18:09:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.738 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.738 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.738 18:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.738 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.738 "name": "raid_bdev1", 00:11:56.738 "uuid": "4bc4dd50-8b0e-4c7d-a71f-251ad57af273", 00:11:56.738 "strip_size_kb": 64, 00:11:56.738 "state": "configuring", 00:11:56.738 "raid_level": "concat", 00:11:56.738 "superblock": true, 00:11:56.738 "num_base_bdevs": 4, 00:11:56.738 "num_base_bdevs_discovered": 1, 00:11:56.738 "num_base_bdevs_operational": 4, 00:11:56.738 "base_bdevs_list": [ 00:11:56.738 { 00:11:56.738 "name": "pt1", 00:11:56.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.738 "is_configured": true, 00:11:56.738 "data_offset": 2048, 00:11:56.738 "data_size": 63488 00:11:56.738 }, 00:11:56.738 { 00:11:56.738 "name": null, 00:11:56.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.738 "is_configured": false, 00:11:56.738 "data_offset": 0, 00:11:56.738 "data_size": 63488 00:11:56.738 }, 00:11:56.738 { 00:11:56.738 "name": null, 00:11:56.738 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.738 "is_configured": false, 00:11:56.738 "data_offset": 2048, 00:11:56.738 "data_size": 63488 00:11:56.738 }, 00:11:56.738 { 00:11:56.738 "name": null, 00:11:56.738 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.738 "is_configured": false, 00:11:56.738 "data_offset": 2048, 00:11:56.738 "data_size": 63488 00:11:56.738 } 00:11:56.738 ] 00:11:56.738 }' 00:11:56.738 18:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.738 18:09:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.996 [2024-12-06 18:09:09.115593] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:56.996 [2024-12-06 18:09:09.115764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.996 [2024-12-06 18:09:09.115812] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:56.996 [2024-12-06 18:09:09.115826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.996 [2024-12-06 18:09:09.116372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.996 [2024-12-06 18:09:09.116397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:56.996 [2024-12-06 18:09:09.116510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:56.996 [2024-12-06 18:09:09.116537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:56.996 pt2 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.996 [2024-12-06 18:09:09.127562] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:56.996 [2024-12-06 18:09:09.127638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.996 [2024-12-06 18:09:09.127663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:56.996 [2024-12-06 18:09:09.127674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.996 [2024-12-06 18:09:09.128218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.996 [2024-12-06 18:09:09.128245] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:56.996 [2024-12-06 18:09:09.128345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:56.996 [2024-12-06 18:09:09.128388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:56.996 pt3 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.996 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.996 [2024-12-06 18:09:09.139470] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:56.996 [2024-12-06 18:09:09.139519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.996 [2024-12-06 18:09:09.139555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:56.996 [2024-12-06 18:09:09.139564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.996 [2024-12-06 18:09:09.139996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.996 [2024-12-06 18:09:09.140013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:56.996 [2024-12-06 18:09:09.140108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:56.996 [2024-12-06 18:09:09.140135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:56.996 [2024-12-06 18:09:09.140296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:56.997 [2024-12-06 18:09:09.140306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:56.997 [2024-12-06 18:09:09.140575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:56.997 [2024-12-06 18:09:09.140758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:56.997 [2024-12-06 18:09:09.140773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:56.997 [2024-12-06 18:09:09.140936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.997 pt4 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.997 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.256 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.256 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.256 "name": "raid_bdev1", 00:11:57.256 "uuid": "4bc4dd50-8b0e-4c7d-a71f-251ad57af273", 00:11:57.256 "strip_size_kb": 64, 00:11:57.256 "state": "online", 00:11:57.256 "raid_level": "concat", 00:11:57.256 
"superblock": true, 00:11:57.256 "num_base_bdevs": 4, 00:11:57.256 "num_base_bdevs_discovered": 4, 00:11:57.256 "num_base_bdevs_operational": 4, 00:11:57.256 "base_bdevs_list": [ 00:11:57.256 { 00:11:57.256 "name": "pt1", 00:11:57.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.256 "is_configured": true, 00:11:57.256 "data_offset": 2048, 00:11:57.256 "data_size": 63488 00:11:57.256 }, 00:11:57.256 { 00:11:57.256 "name": "pt2", 00:11:57.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.256 "is_configured": true, 00:11:57.256 "data_offset": 2048, 00:11:57.256 "data_size": 63488 00:11:57.256 }, 00:11:57.256 { 00:11:57.256 "name": "pt3", 00:11:57.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.256 "is_configured": true, 00:11:57.256 "data_offset": 2048, 00:11:57.256 "data_size": 63488 00:11:57.256 }, 00:11:57.256 { 00:11:57.256 "name": "pt4", 00:11:57.256 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.256 "is_configured": true, 00:11:57.256 "data_offset": 2048, 00:11:57.256 "data_size": 63488 00:11:57.256 } 00:11:57.256 ] 00:11:57.256 }' 00:11:57.256 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.256 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.515 18:09:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 [2024-12-06 18:09:09.650995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.515 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.779 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.779 "name": "raid_bdev1", 00:11:57.779 "aliases": [ 00:11:57.779 "4bc4dd50-8b0e-4c7d-a71f-251ad57af273" 00:11:57.779 ], 00:11:57.779 "product_name": "Raid Volume", 00:11:57.779 "block_size": 512, 00:11:57.779 "num_blocks": 253952, 00:11:57.779 "uuid": "4bc4dd50-8b0e-4c7d-a71f-251ad57af273", 00:11:57.779 "assigned_rate_limits": { 00:11:57.779 "rw_ios_per_sec": 0, 00:11:57.779 "rw_mbytes_per_sec": 0, 00:11:57.779 "r_mbytes_per_sec": 0, 00:11:57.779 "w_mbytes_per_sec": 0 00:11:57.779 }, 00:11:57.779 "claimed": false, 00:11:57.779 "zoned": false, 00:11:57.779 "supported_io_types": { 00:11:57.779 "read": true, 00:11:57.779 "write": true, 00:11:57.779 "unmap": true, 00:11:57.779 "flush": true, 00:11:57.779 "reset": true, 00:11:57.779 "nvme_admin": false, 00:11:57.779 "nvme_io": false, 00:11:57.779 "nvme_io_md": false, 00:11:57.779 "write_zeroes": true, 00:11:57.779 "zcopy": false, 00:11:57.779 "get_zone_info": false, 00:11:57.779 "zone_management": false, 00:11:57.779 "zone_append": false, 00:11:57.779 "compare": false, 00:11:57.779 "compare_and_write": false, 00:11:57.779 "abort": false, 00:11:57.779 "seek_hole": false, 00:11:57.779 "seek_data": false, 00:11:57.779 "copy": false, 00:11:57.779 "nvme_iov_md": false 00:11:57.779 }, 00:11:57.779 
"memory_domains": [ 00:11:57.779 { 00:11:57.779 "dma_device_id": "system", 00:11:57.779 "dma_device_type": 1 00:11:57.779 }, 00:11:57.779 { 00:11:57.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.779 "dma_device_type": 2 00:11:57.779 }, 00:11:57.779 { 00:11:57.779 "dma_device_id": "system", 00:11:57.779 "dma_device_type": 1 00:11:57.779 }, 00:11:57.779 { 00:11:57.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.779 "dma_device_type": 2 00:11:57.779 }, 00:11:57.779 { 00:11:57.779 "dma_device_id": "system", 00:11:57.779 "dma_device_type": 1 00:11:57.779 }, 00:11:57.779 { 00:11:57.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.779 "dma_device_type": 2 00:11:57.779 }, 00:11:57.779 { 00:11:57.779 "dma_device_id": "system", 00:11:57.779 "dma_device_type": 1 00:11:57.779 }, 00:11:57.779 { 00:11:57.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.779 "dma_device_type": 2 00:11:57.779 } 00:11:57.779 ], 00:11:57.779 "driver_specific": { 00:11:57.779 "raid": { 00:11:57.779 "uuid": "4bc4dd50-8b0e-4c7d-a71f-251ad57af273", 00:11:57.779 "strip_size_kb": 64, 00:11:57.779 "state": "online", 00:11:57.779 "raid_level": "concat", 00:11:57.779 "superblock": true, 00:11:57.779 "num_base_bdevs": 4, 00:11:57.779 "num_base_bdevs_discovered": 4, 00:11:57.779 "num_base_bdevs_operational": 4, 00:11:57.779 "base_bdevs_list": [ 00:11:57.779 { 00:11:57.779 "name": "pt1", 00:11:57.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.779 "is_configured": true, 00:11:57.779 "data_offset": 2048, 00:11:57.779 "data_size": 63488 00:11:57.779 }, 00:11:57.779 { 00:11:57.779 "name": "pt2", 00:11:57.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.779 "is_configured": true, 00:11:57.779 "data_offset": 2048, 00:11:57.779 "data_size": 63488 00:11:57.779 }, 00:11:57.779 { 00:11:57.779 "name": "pt3", 00:11:57.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.780 "is_configured": true, 00:11:57.780 "data_offset": 2048, 00:11:57.780 "data_size": 63488 
00:11:57.780 }, 00:11:57.780 { 00:11:57.780 "name": "pt4", 00:11:57.780 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.780 "is_configured": true, 00:11:57.780 "data_offset": 2048, 00:11:57.780 "data_size": 63488 00:11:57.780 } 00:11:57.780 ] 00:11:57.780 } 00:11:57.780 } 00:11:57.780 }' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:57.780 pt2 00:11:57.780 pt3 00:11:57.780 pt4' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.780 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.038 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.038 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.038 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.038 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.038 18:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:58.038 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.038 18:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.038 [2024-12-06 18:09:09.994405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.038 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.038 18:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4bc4dd50-8b0e-4c7d-a71f-251ad57af273 '!=' 4bc4dd50-8b0e-4c7d-a71f-251ad57af273 ']' 00:11:58.038 18:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73107 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73107 ']' 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73107 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73107 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73107' 00:11:58.039 killing process with pid 73107 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73107 00:11:58.039 [2024-12-06 18:09:10.080992] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.039 [2024-12-06 18:09:10.081113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.039 [2024-12-06 18:09:10.081199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.039 [2024-12-06 18:09:10.081209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:58.039 18:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73107 00:11:58.608 [2024-12-06 18:09:10.502961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.548 18:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:59.548 00:11:59.548 real 0m5.835s 00:11:59.548 user 0m8.401s 00:11:59.548 sys 0m1.028s 00:11:59.548 18:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.548 ************************************ 00:11:59.548 END TEST raid_superblock_test 00:11:59.548 ************************************ 00:11:59.548 18:09:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.809 18:09:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:59.809 18:09:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:59.809 18:09:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.809 18:09:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.809 ************************************ 00:11:59.809 START TEST raid_read_error_test 00:11:59.809 ************************************ 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LVTpbVhFII 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73372 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73372 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73372 ']' 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.809 18:09:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.809 [2024-12-06 18:09:11.866747] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:11:59.809 [2024-12-06 18:09:11.866947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73372 ] 00:12:00.070 [2024-12-06 18:09:12.044079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.070 [2024-12-06 18:09:12.165275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.329 [2024-12-06 18:09:12.390334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.329 [2024-12-06 18:09:12.390400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.900 BaseBdev1_malloc 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.900 true 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.900 [2024-12-06 18:09:12.826072] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:00.900 [2024-12-06 18:09:12.826137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.900 [2024-12-06 18:09:12.826161] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:00.900 [2024-12-06 18:09:12.826173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.900 [2024-12-06 18:09:12.828563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.900 [2024-12-06 18:09:12.828605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:00.900 BaseBdev1 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.900 BaseBdev2_malloc 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.900 true 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.900 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 [2024-12-06 18:09:12.895765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:00.901 [2024-12-06 18:09:12.895828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.901 [2024-12-06 18:09:12.895848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:00.901 [2024-12-06 18:09:12.895859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.901 [2024-12-06 18:09:12.898118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.901 [2024-12-06 18:09:12.898157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:00.901 BaseBdev2 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 BaseBdev3_malloc 00:12:00.901 18:09:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 true 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 [2024-12-06 18:09:12.975638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:00.901 [2024-12-06 18:09:12.975698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.901 [2024-12-06 18:09:12.975720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:00.901 [2024-12-06 18:09:12.975732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.901 [2024-12-06 18:09:12.978092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.901 [2024-12-06 18:09:12.978130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:00.901 BaseBdev3 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.901 18:09:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 BaseBdev4_malloc 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 true 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 [2024-12-06 18:09:13.044018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:00.901 [2024-12-06 18:09:13.044159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.901 [2024-12-06 18:09:13.044190] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:00.901 [2024-12-06 18:09:13.044208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.901 [2024-12-06 18:09:13.046419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.901 [2024-12-06 18:09:13.046460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:00.901 BaseBdev4 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.901 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 [2024-12-06 18:09:13.056072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.901 [2024-12-06 18:09:13.058001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.901 [2024-12-06 18:09:13.058096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.901 [2024-12-06 18:09:13.058166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:00.901 [2024-12-06 18:09:13.058417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:00.901 [2024-12-06 18:09:13.058436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:00.901 [2024-12-06 18:09:13.058709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:00.901 [2024-12-06 18:09:13.058893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:00.901 [2024-12-06 18:09:13.058905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:00.901 [2024-12-06 18:09:13.059084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:01.161 18:09:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.161 "name": "raid_bdev1", 00:12:01.161 "uuid": "3c619877-20a6-4b3d-82e1-f10c49ce0339", 00:12:01.161 "strip_size_kb": 64, 00:12:01.161 "state": "online", 00:12:01.161 "raid_level": "concat", 00:12:01.161 "superblock": true, 00:12:01.161 "num_base_bdevs": 4, 00:12:01.161 "num_base_bdevs_discovered": 4, 00:12:01.161 "num_base_bdevs_operational": 4, 00:12:01.161 "base_bdevs_list": [ 
00:12:01.161 { 00:12:01.161 "name": "BaseBdev1", 00:12:01.161 "uuid": "307b8711-1039-5296-a6c5-e5dfdcea09fc", 00:12:01.161 "is_configured": true, 00:12:01.161 "data_offset": 2048, 00:12:01.161 "data_size": 63488 00:12:01.161 }, 00:12:01.161 { 00:12:01.161 "name": "BaseBdev2", 00:12:01.161 "uuid": "699ca9cb-407e-5f55-9260-d3ed9d1e2749", 00:12:01.161 "is_configured": true, 00:12:01.161 "data_offset": 2048, 00:12:01.161 "data_size": 63488 00:12:01.161 }, 00:12:01.161 { 00:12:01.161 "name": "BaseBdev3", 00:12:01.161 "uuid": "109a62de-7466-5744-8be3-1c94e0c211b7", 00:12:01.161 "is_configured": true, 00:12:01.161 "data_offset": 2048, 00:12:01.161 "data_size": 63488 00:12:01.161 }, 00:12:01.161 { 00:12:01.161 "name": "BaseBdev4", 00:12:01.161 "uuid": "ba99e829-0bff-5a4b-8568-bf4b4930f237", 00:12:01.161 "is_configured": true, 00:12:01.161 "data_offset": 2048, 00:12:01.161 "data_size": 63488 00:12:01.161 } 00:12:01.161 ] 00:12:01.161 }' 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.161 18:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.420 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:01.420 18:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:01.679 [2024-12-06 18:09:13.612702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:02.628 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:02.628 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.628 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.628 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.628 18:09:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.629 18:09:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.629 "name": "raid_bdev1", 00:12:02.629 "uuid": "3c619877-20a6-4b3d-82e1-f10c49ce0339", 00:12:02.629 "strip_size_kb": 64, 00:12:02.629 "state": "online", 00:12:02.629 "raid_level": "concat", 00:12:02.629 "superblock": true, 00:12:02.629 "num_base_bdevs": 4, 00:12:02.629 "num_base_bdevs_discovered": 4, 00:12:02.629 "num_base_bdevs_operational": 4, 00:12:02.629 "base_bdevs_list": [ 00:12:02.629 { 00:12:02.629 "name": "BaseBdev1", 00:12:02.629 "uuid": "307b8711-1039-5296-a6c5-e5dfdcea09fc", 00:12:02.629 "is_configured": true, 00:12:02.629 "data_offset": 2048, 00:12:02.629 "data_size": 63488 00:12:02.629 }, 00:12:02.629 { 00:12:02.629 "name": "BaseBdev2", 00:12:02.629 "uuid": "699ca9cb-407e-5f55-9260-d3ed9d1e2749", 00:12:02.629 "is_configured": true, 00:12:02.629 "data_offset": 2048, 00:12:02.629 "data_size": 63488 00:12:02.629 }, 00:12:02.629 { 00:12:02.629 "name": "BaseBdev3", 00:12:02.629 "uuid": "109a62de-7466-5744-8be3-1c94e0c211b7", 00:12:02.629 "is_configured": true, 00:12:02.629 "data_offset": 2048, 00:12:02.629 "data_size": 63488 00:12:02.629 }, 00:12:02.629 { 00:12:02.629 "name": "BaseBdev4", 00:12:02.629 "uuid": "ba99e829-0bff-5a4b-8568-bf4b4930f237", 00:12:02.629 "is_configured": true, 00:12:02.629 "data_offset": 2048, 00:12:02.629 "data_size": 63488 00:12:02.629 } 00:12:02.629 ] 00:12:02.629 }' 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.629 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.888 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.888 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.888 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.888 [2024-12-06 18:09:14.993715] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.888 [2024-12-06 18:09:14.993842] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.888 [2024-12-06 18:09:14.997148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.888 [2024-12-06 18:09:14.997250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.888 [2024-12-06 18:09:14.997374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.888 [2024-12-06 18:09:14.997446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:02.888 { 00:12:02.888 "results": [ 00:12:02.888 { 00:12:02.888 "job": "raid_bdev1", 00:12:02.888 "core_mask": "0x1", 00:12:02.888 "workload": "randrw", 00:12:02.888 "percentage": 50, 00:12:02.888 "status": "finished", 00:12:02.888 "queue_depth": 1, 00:12:02.888 "io_size": 131072, 00:12:02.888 "runtime": 1.381821, 00:12:02.888 "iops": 14027.142444643698, 00:12:02.888 "mibps": 1753.3928055804622, 00:12:02.888 "io_failed": 1, 00:12:02.888 "io_timeout": 0, 00:12:02.888 "avg_latency_us": 98.60688921849741, 00:12:02.888 "min_latency_us": 28.618340611353712, 00:12:02.888 "max_latency_us": 1581.1633187772925 00:12:02.888 } 00:12:02.888 ], 00:12:02.888 "core_count": 1 00:12:02.888 } 00:12:02.888 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.888 18:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73372 00:12:02.888 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73372 ']' 00:12:02.888 18:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73372 00:12:02.888 18:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:02.888 18:09:15 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.888 18:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73372 00:12:02.888 18:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.888 18:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.888 18:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73372' 00:12:02.888 killing process with pid 73372 00:12:02.888 18:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73372 00:12:02.888 [2024-12-06 18:09:15.044051] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.888 18:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73372 00:12:03.457 [2024-12-06 18:09:15.400800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LVTpbVhFII 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:04.832 00:12:04.832 real 0m4.939s 00:12:04.832 user 0m5.840s 00:12:04.832 sys 0m0.622s 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:04.832 18:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.832 ************************************ 00:12:04.832 END TEST raid_read_error_test 00:12:04.832 ************************************ 00:12:04.832 18:09:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:04.832 18:09:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:04.832 18:09:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.832 18:09:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.832 ************************************ 00:12:04.832 START TEST raid_write_error_test 00:12:04.832 ************************************ 00:12:04.832 18:09:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:12:04.832 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:04.832 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:04.832 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:04.832 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DYLDcFk3Xc 00:12:04.833 18:09:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73527 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73527 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73527 ']' 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.833 18:09:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.833 [2024-12-06 18:09:16.874193] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:12:04.833 [2024-12-06 18:09:16.874310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73527 ] 00:12:05.114 [2024-12-06 18:09:17.050609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.114 [2024-12-06 18:09:17.173081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.372 [2024-12-06 18:09:17.390378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.372 [2024-12-06 18:09:17.390447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.630 BaseBdev1_malloc 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.630 true 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.630 [2024-12-06 18:09:17.788554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:05.630 [2024-12-06 18:09:17.788611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.630 [2024-12-06 18:09:17.788633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:05.630 [2024-12-06 18:09:17.788644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.630 [2024-12-06 18:09:17.790807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.630 [2024-12-06 18:09:17.790926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.630 BaseBdev1 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.630 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 BaseBdev2_malloc 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:05.888 18:09:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 true 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 [2024-12-06 18:09:17.856879] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:05.888 [2024-12-06 18:09:17.856941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.888 [2024-12-06 18:09:17.856962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:05.888 [2024-12-06 18:09:17.856974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.888 [2024-12-06 18:09:17.859365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.888 [2024-12-06 18:09:17.859407] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.888 BaseBdev2 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:05.888 BaseBdev3_malloc 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 true 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 [2024-12-06 18:09:17.941869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:05.888 [2024-12-06 18:09:17.942016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.888 [2024-12-06 18:09:17.942044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:05.888 [2024-12-06 18:09:17.942056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.888 [2024-12-06 18:09:17.944419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.888 [2024-12-06 18:09:17.944461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:05.888 BaseBdev3 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 BaseBdev4_malloc 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 true 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 [2024-12-06 18:09:18.012146] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:05.888 [2024-12-06 18:09:18.012203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.888 [2024-12-06 18:09:18.012224] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:05.888 [2024-12-06 18:09:18.012236] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.888 [2024-12-06 18:09:18.014479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.888 [2024-12-06 18:09:18.014534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:05.888 BaseBdev4 
00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 [2024-12-06 18:09:18.024186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.888 [2024-12-06 18:09:18.026035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.888 [2024-12-06 18:09:18.026123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.888 [2024-12-06 18:09:18.026185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.888 [2024-12-06 18:09:18.026411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:05.888 [2024-12-06 18:09:18.026428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:05.888 [2024-12-06 18:09:18.026664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:05.888 [2024-12-06 18:09:18.026824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:05.888 [2024-12-06 18:09:18.026835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:05.888 [2024-12-06 18:09:18.026990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.147 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.147 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.147 "name": "raid_bdev1", 00:12:06.147 "uuid": "0d0adebe-8937-4a6b-a0a4-3d07ae979154", 00:12:06.147 "strip_size_kb": 64, 00:12:06.147 "state": "online", 00:12:06.147 "raid_level": "concat", 00:12:06.147 "superblock": true, 00:12:06.147 "num_base_bdevs": 4, 00:12:06.147 "num_base_bdevs_discovered": 4, 00:12:06.147 
"num_base_bdevs_operational": 4, 00:12:06.147 "base_bdevs_list": [ 00:12:06.147 { 00:12:06.147 "name": "BaseBdev1", 00:12:06.147 "uuid": "4f1ee20f-4331-51e2-86ea-f47be29aae97", 00:12:06.147 "is_configured": true, 00:12:06.147 "data_offset": 2048, 00:12:06.147 "data_size": 63488 00:12:06.147 }, 00:12:06.147 { 00:12:06.147 "name": "BaseBdev2", 00:12:06.147 "uuid": "8f77470f-796d-53f0-9e27-40087c311622", 00:12:06.147 "is_configured": true, 00:12:06.147 "data_offset": 2048, 00:12:06.147 "data_size": 63488 00:12:06.147 }, 00:12:06.147 { 00:12:06.147 "name": "BaseBdev3", 00:12:06.147 "uuid": "28937b59-ef71-5a16-aacb-c680c18345e8", 00:12:06.147 "is_configured": true, 00:12:06.147 "data_offset": 2048, 00:12:06.147 "data_size": 63488 00:12:06.147 }, 00:12:06.147 { 00:12:06.147 "name": "BaseBdev4", 00:12:06.147 "uuid": "5ae319ed-5bd0-56eb-b266-dfdf20d48e49", 00:12:06.147 "is_configured": true, 00:12:06.147 "data_offset": 2048, 00:12:06.147 "data_size": 63488 00:12:06.147 } 00:12:06.147 ] 00:12:06.147 }' 00:12:06.147 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.147 18:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.406 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.406 18:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:06.406 [2024-12-06 18:09:18.548849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.377 18:09:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.377 "name": "raid_bdev1", 00:12:07.377 "uuid": "0d0adebe-8937-4a6b-a0a4-3d07ae979154", 00:12:07.377 "strip_size_kb": 64, 00:12:07.377 "state": "online", 00:12:07.377 "raid_level": "concat", 00:12:07.377 "superblock": true, 00:12:07.377 "num_base_bdevs": 4, 00:12:07.377 "num_base_bdevs_discovered": 4, 00:12:07.377 "num_base_bdevs_operational": 4, 00:12:07.377 "base_bdevs_list": [ 00:12:07.377 { 00:12:07.377 "name": "BaseBdev1", 00:12:07.377 "uuid": "4f1ee20f-4331-51e2-86ea-f47be29aae97", 00:12:07.377 "is_configured": true, 00:12:07.377 "data_offset": 2048, 00:12:07.377 "data_size": 63488 00:12:07.377 }, 00:12:07.377 { 00:12:07.377 "name": "BaseBdev2", 00:12:07.377 "uuid": "8f77470f-796d-53f0-9e27-40087c311622", 00:12:07.377 "is_configured": true, 00:12:07.377 "data_offset": 2048, 00:12:07.377 "data_size": 63488 00:12:07.377 }, 00:12:07.377 { 00:12:07.377 "name": "BaseBdev3", 00:12:07.377 "uuid": "28937b59-ef71-5a16-aacb-c680c18345e8", 00:12:07.377 "is_configured": true, 00:12:07.377 "data_offset": 2048, 00:12:07.377 "data_size": 63488 00:12:07.377 }, 00:12:07.377 { 00:12:07.377 "name": "BaseBdev4", 00:12:07.377 "uuid": "5ae319ed-5bd0-56eb-b266-dfdf20d48e49", 00:12:07.377 "is_configured": true, 00:12:07.377 "data_offset": 2048, 00:12:07.377 "data_size": 63488 00:12:07.377 } 00:12:07.377 ] 00:12:07.377 }' 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.377 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.944 [2024-12-06 18:09:19.937688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.944 [2024-12-06 18:09:19.937797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.944 [2024-12-06 18:09:19.940825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.944 [2024-12-06 18:09:19.940930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.944 [2024-12-06 18:09:19.940997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.944 [2024-12-06 18:09:19.941058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:07.944 { 00:12:07.944 "results": [ 00:12:07.944 { 00:12:07.944 "job": "raid_bdev1", 00:12:07.944 "core_mask": "0x1", 00:12:07.944 "workload": "randrw", 00:12:07.944 "percentage": 50, 00:12:07.944 "status": "finished", 00:12:07.944 "queue_depth": 1, 00:12:07.944 "io_size": 131072, 00:12:07.944 "runtime": 1.389714, 00:12:07.944 "iops": 14233.144373590538, 00:12:07.944 "mibps": 1779.1430466988172, 00:12:07.944 "io_failed": 1, 00:12:07.944 "io_timeout": 0, 00:12:07.944 "avg_latency_us": 97.22473663029386, 00:12:07.944 "min_latency_us": 28.05938864628821, 00:12:07.944 "max_latency_us": 1616.9362445414847 00:12:07.944 } 00:12:07.944 ], 00:12:07.944 "core_count": 1 00:12:07.944 } 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73527 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73527 ']' 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73527 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73527 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73527' 00:12:07.944 killing process with pid 73527 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73527 00:12:07.944 [2024-12-06 18:09:19.987327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.944 18:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73527 00:12:08.202 [2024-12-06 18:09:20.348467] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DYLDcFk3Xc 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:09.624 00:12:09.624 real 0m4.888s 00:12:09.624 user 0m5.711s 
00:12:09.624 sys 0m0.605s 00:12:09.624 ************************************ 00:12:09.624 END TEST raid_write_error_test 00:12:09.624 ************************************ 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.624 18:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.624 18:09:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:09.624 18:09:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:09.624 18:09:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:09.625 18:09:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.625 18:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.625 ************************************ 00:12:09.625 START TEST raid_state_function_test 00:12:09.625 ************************************ 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:09.625 
18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:09.625 18:09:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73671 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:09.625 Process raid pid: 73671 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73671' 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73671 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73671 ']' 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.625 18:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.883 [2024-12-06 18:09:21.822134] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:12:09.883 [2024-12-06 18:09:21.822263] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.883 [2024-12-06 18:09:22.000624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.141 [2024-12-06 18:09:22.120277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.400 [2024-12-06 18:09:22.342368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.400 [2024-12-06 18:09:22.342468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.659 [2024-12-06 18:09:22.696635] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.659 [2024-12-06 18:09:22.696697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.659 [2024-12-06 18:09:22.696709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.659 [2024-12-06 18:09:22.696719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.659 [2024-12-06 18:09:22.696726] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:10.659 [2024-12-06 18:09:22.696735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:10.659 [2024-12-06 18:09:22.696747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:10.659 [2024-12-06 18:09:22.696756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.659 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.659 "name": "Existed_Raid", 00:12:10.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.659 "strip_size_kb": 0, 00:12:10.659 "state": "configuring", 00:12:10.659 "raid_level": "raid1", 00:12:10.659 "superblock": false, 00:12:10.659 "num_base_bdevs": 4, 00:12:10.659 "num_base_bdevs_discovered": 0, 00:12:10.659 "num_base_bdevs_operational": 4, 00:12:10.659 "base_bdevs_list": [ 00:12:10.659 { 00:12:10.659 "name": "BaseBdev1", 00:12:10.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.659 "is_configured": false, 00:12:10.659 "data_offset": 0, 00:12:10.659 "data_size": 0 00:12:10.659 }, 00:12:10.659 { 00:12:10.659 "name": "BaseBdev2", 00:12:10.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.659 "is_configured": false, 00:12:10.659 "data_offset": 0, 00:12:10.659 "data_size": 0 00:12:10.659 }, 00:12:10.659 { 00:12:10.659 "name": "BaseBdev3", 00:12:10.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.660 "is_configured": false, 00:12:10.660 "data_offset": 0, 00:12:10.660 "data_size": 0 00:12:10.660 }, 00:12:10.660 { 00:12:10.660 "name": "BaseBdev4", 00:12:10.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.660 "is_configured": false, 00:12:10.660 "data_offset": 0, 00:12:10.660 "data_size": 0 00:12:10.660 } 00:12:10.660 ] 00:12:10.660 }' 00:12:10.660 18:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.660 18:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 [2024-12-06 18:09:23.175794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.227 [2024-12-06 18:09:23.175888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 [2024-12-06 18:09:23.187744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:11.227 [2024-12-06 18:09:23.187840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:11.227 [2024-12-06 18:09:23.187872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:11.227 [2024-12-06 18:09:23.187898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:11.227 [2024-12-06 18:09:23.187919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:11.227 [2024-12-06 18:09:23.187944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:11.227 [2024-12-06 18:09:23.187964] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:11.227 [2024-12-06 18:09:23.188001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 [2024-12-06 18:09:23.237986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.227 BaseBdev1 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 [ 00:12:11.227 { 00:12:11.227 "name": "BaseBdev1", 00:12:11.227 "aliases": [ 00:12:11.227 "03ec3569-9af2-4587-ac05-19a75b67f504" 00:12:11.227 ], 00:12:11.227 "product_name": "Malloc disk", 00:12:11.227 "block_size": 512, 00:12:11.227 "num_blocks": 65536, 00:12:11.227 "uuid": "03ec3569-9af2-4587-ac05-19a75b67f504", 00:12:11.227 "assigned_rate_limits": { 00:12:11.227 "rw_ios_per_sec": 0, 00:12:11.227 "rw_mbytes_per_sec": 0, 00:12:11.227 "r_mbytes_per_sec": 0, 00:12:11.227 "w_mbytes_per_sec": 0 00:12:11.227 }, 00:12:11.227 "claimed": true, 00:12:11.227 "claim_type": "exclusive_write", 00:12:11.227 "zoned": false, 00:12:11.227 "supported_io_types": { 00:12:11.227 "read": true, 00:12:11.227 "write": true, 00:12:11.227 "unmap": true, 00:12:11.227 "flush": true, 00:12:11.227 "reset": true, 00:12:11.227 "nvme_admin": false, 00:12:11.227 "nvme_io": false, 00:12:11.227 "nvme_io_md": false, 00:12:11.227 "write_zeroes": true, 00:12:11.227 "zcopy": true, 00:12:11.227 "get_zone_info": false, 00:12:11.227 "zone_management": false, 00:12:11.227 "zone_append": false, 00:12:11.227 "compare": false, 00:12:11.227 "compare_and_write": false, 00:12:11.227 "abort": true, 00:12:11.227 "seek_hole": false, 00:12:11.227 "seek_data": false, 00:12:11.227 "copy": true, 00:12:11.227 "nvme_iov_md": false 00:12:11.227 }, 00:12:11.227 "memory_domains": [ 00:12:11.227 { 00:12:11.227 "dma_device_id": "system", 00:12:11.227 "dma_device_type": 1 00:12:11.227 }, 00:12:11.227 { 00:12:11.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.227 "dma_device_type": 2 00:12:11.227 } 00:12:11.227 ], 00:12:11.227 "driver_specific": {} 00:12:11.227 } 00:12:11.227 ] 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.227 "name": "Existed_Raid", 00:12:11.227 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:11.227 "strip_size_kb": 0, 00:12:11.227 "state": "configuring", 00:12:11.227 "raid_level": "raid1", 00:12:11.227 "superblock": false, 00:12:11.227 "num_base_bdevs": 4, 00:12:11.227 "num_base_bdevs_discovered": 1, 00:12:11.227 "num_base_bdevs_operational": 4, 00:12:11.227 "base_bdevs_list": [ 00:12:11.227 { 00:12:11.227 "name": "BaseBdev1", 00:12:11.227 "uuid": "03ec3569-9af2-4587-ac05-19a75b67f504", 00:12:11.227 "is_configured": true, 00:12:11.227 "data_offset": 0, 00:12:11.227 "data_size": 65536 00:12:11.227 }, 00:12:11.227 { 00:12:11.227 "name": "BaseBdev2", 00:12:11.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.227 "is_configured": false, 00:12:11.227 "data_offset": 0, 00:12:11.227 "data_size": 0 00:12:11.227 }, 00:12:11.227 { 00:12:11.227 "name": "BaseBdev3", 00:12:11.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.227 "is_configured": false, 00:12:11.227 "data_offset": 0, 00:12:11.227 "data_size": 0 00:12:11.227 }, 00:12:11.227 { 00:12:11.227 "name": "BaseBdev4", 00:12:11.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.227 "is_configured": false, 00:12:11.227 "data_offset": 0, 00:12:11.227 "data_size": 0 00:12:11.227 } 00:12:11.227 ] 00:12:11.227 }' 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.227 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.796 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:11.796 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.796 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.796 [2024-12-06 18:09:23.729229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.796 [2024-12-06 18:09:23.729288] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:11.796 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.796 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:11.796 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.796 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.796 [2024-12-06 18:09:23.741242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.796 [2024-12-06 18:09:23.743023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:11.796 [2024-12-06 18:09:23.743083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:11.796 [2024-12-06 18:09:23.743094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:11.796 [2024-12-06 18:09:23.743106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:11.797 [2024-12-06 18:09:23.743113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:11.797 [2024-12-06 18:09:23.743122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:11.797 18:09:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.797 "name": "Existed_Raid", 00:12:11.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.797 "strip_size_kb": 0, 00:12:11.797 "state": "configuring", 00:12:11.797 "raid_level": "raid1", 00:12:11.797 "superblock": false, 00:12:11.797 "num_base_bdevs": 4, 00:12:11.797 "num_base_bdevs_discovered": 1, 00:12:11.797 
"num_base_bdevs_operational": 4, 00:12:11.797 "base_bdevs_list": [ 00:12:11.797 { 00:12:11.797 "name": "BaseBdev1", 00:12:11.797 "uuid": "03ec3569-9af2-4587-ac05-19a75b67f504", 00:12:11.797 "is_configured": true, 00:12:11.797 "data_offset": 0, 00:12:11.797 "data_size": 65536 00:12:11.797 }, 00:12:11.797 { 00:12:11.797 "name": "BaseBdev2", 00:12:11.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.797 "is_configured": false, 00:12:11.797 "data_offset": 0, 00:12:11.797 "data_size": 0 00:12:11.797 }, 00:12:11.797 { 00:12:11.797 "name": "BaseBdev3", 00:12:11.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.797 "is_configured": false, 00:12:11.797 "data_offset": 0, 00:12:11.797 "data_size": 0 00:12:11.797 }, 00:12:11.797 { 00:12:11.797 "name": "BaseBdev4", 00:12:11.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.797 "is_configured": false, 00:12:11.797 "data_offset": 0, 00:12:11.797 "data_size": 0 00:12:11.797 } 00:12:11.797 ] 00:12:11.797 }' 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.797 18:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.057 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:12.057 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.057 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.316 [2024-12-06 18:09:24.255332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.316 BaseBdev2 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.316 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.316 [ 00:12:12.316 { 00:12:12.316 "name": "BaseBdev2", 00:12:12.316 "aliases": [ 00:12:12.316 "eaeecefe-27db-4c11-a9ee-a39ebecf4359" 00:12:12.316 ], 00:12:12.316 "product_name": "Malloc disk", 00:12:12.316 "block_size": 512, 00:12:12.316 "num_blocks": 65536, 00:12:12.316 "uuid": "eaeecefe-27db-4c11-a9ee-a39ebecf4359", 00:12:12.316 "assigned_rate_limits": { 00:12:12.317 "rw_ios_per_sec": 0, 00:12:12.317 "rw_mbytes_per_sec": 0, 00:12:12.317 "r_mbytes_per_sec": 0, 00:12:12.317 "w_mbytes_per_sec": 0 00:12:12.317 }, 00:12:12.317 "claimed": true, 00:12:12.317 "claim_type": "exclusive_write", 00:12:12.317 "zoned": false, 00:12:12.317 "supported_io_types": { 00:12:12.317 "read": true, 00:12:12.317 "write": true, 00:12:12.317 
"unmap": true, 00:12:12.317 "flush": true, 00:12:12.317 "reset": true, 00:12:12.317 "nvme_admin": false, 00:12:12.317 "nvme_io": false, 00:12:12.317 "nvme_io_md": false, 00:12:12.317 "write_zeroes": true, 00:12:12.317 "zcopy": true, 00:12:12.317 "get_zone_info": false, 00:12:12.317 "zone_management": false, 00:12:12.317 "zone_append": false, 00:12:12.317 "compare": false, 00:12:12.317 "compare_and_write": false, 00:12:12.317 "abort": true, 00:12:12.317 "seek_hole": false, 00:12:12.317 "seek_data": false, 00:12:12.317 "copy": true, 00:12:12.317 "nvme_iov_md": false 00:12:12.317 }, 00:12:12.317 "memory_domains": [ 00:12:12.317 { 00:12:12.317 "dma_device_id": "system", 00:12:12.317 "dma_device_type": 1 00:12:12.317 }, 00:12:12.317 { 00:12:12.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.317 "dma_device_type": 2 00:12:12.317 } 00:12:12.317 ], 00:12:12.317 "driver_specific": {} 00:12:12.317 } 00:12:12.317 ] 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.317 18:09:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.317 "name": "Existed_Raid", 00:12:12.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.317 "strip_size_kb": 0, 00:12:12.317 "state": "configuring", 00:12:12.317 "raid_level": "raid1", 00:12:12.317 "superblock": false, 00:12:12.317 "num_base_bdevs": 4, 00:12:12.317 "num_base_bdevs_discovered": 2, 00:12:12.317 "num_base_bdevs_operational": 4, 00:12:12.317 "base_bdevs_list": [ 00:12:12.317 { 00:12:12.317 "name": "BaseBdev1", 00:12:12.317 "uuid": "03ec3569-9af2-4587-ac05-19a75b67f504", 00:12:12.317 "is_configured": true, 00:12:12.317 "data_offset": 0, 00:12:12.317 "data_size": 65536 00:12:12.317 }, 00:12:12.317 { 00:12:12.317 "name": "BaseBdev2", 00:12:12.317 "uuid": "eaeecefe-27db-4c11-a9ee-a39ebecf4359", 00:12:12.317 "is_configured": true, 00:12:12.317 
"data_offset": 0, 00:12:12.317 "data_size": 65536 00:12:12.317 }, 00:12:12.317 { 00:12:12.317 "name": "BaseBdev3", 00:12:12.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.317 "is_configured": false, 00:12:12.317 "data_offset": 0, 00:12:12.317 "data_size": 0 00:12:12.317 }, 00:12:12.317 { 00:12:12.317 "name": "BaseBdev4", 00:12:12.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.317 "is_configured": false, 00:12:12.317 "data_offset": 0, 00:12:12.317 "data_size": 0 00:12:12.317 } 00:12:12.317 ] 00:12:12.317 }' 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.317 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.576 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:12.576 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.576 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.837 [2024-12-06 18:09:24.776838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.837 BaseBdev3 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.837 [ 00:12:12.837 { 00:12:12.837 "name": "BaseBdev3", 00:12:12.837 "aliases": [ 00:12:12.837 "50bbb92d-cddb-48a6-9708-4c7cb679ba86" 00:12:12.837 ], 00:12:12.837 "product_name": "Malloc disk", 00:12:12.837 "block_size": 512, 00:12:12.837 "num_blocks": 65536, 00:12:12.837 "uuid": "50bbb92d-cddb-48a6-9708-4c7cb679ba86", 00:12:12.837 "assigned_rate_limits": { 00:12:12.837 "rw_ios_per_sec": 0, 00:12:12.837 "rw_mbytes_per_sec": 0, 00:12:12.837 "r_mbytes_per_sec": 0, 00:12:12.837 "w_mbytes_per_sec": 0 00:12:12.837 }, 00:12:12.837 "claimed": true, 00:12:12.837 "claim_type": "exclusive_write", 00:12:12.837 "zoned": false, 00:12:12.837 "supported_io_types": { 00:12:12.837 "read": true, 00:12:12.837 "write": true, 00:12:12.837 "unmap": true, 00:12:12.837 "flush": true, 00:12:12.837 "reset": true, 00:12:12.837 "nvme_admin": false, 00:12:12.837 "nvme_io": false, 00:12:12.837 "nvme_io_md": false, 00:12:12.837 "write_zeroes": true, 00:12:12.837 "zcopy": true, 00:12:12.837 "get_zone_info": false, 00:12:12.837 "zone_management": false, 00:12:12.837 "zone_append": false, 00:12:12.837 "compare": false, 00:12:12.837 "compare_and_write": false, 00:12:12.837 "abort": true, 
00:12:12.837 "seek_hole": false, 00:12:12.837 "seek_data": false, 00:12:12.837 "copy": true, 00:12:12.837 "nvme_iov_md": false 00:12:12.837 }, 00:12:12.837 "memory_domains": [ 00:12:12.837 { 00:12:12.837 "dma_device_id": "system", 00:12:12.837 "dma_device_type": 1 00:12:12.837 }, 00:12:12.837 { 00:12:12.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.837 "dma_device_type": 2 00:12:12.837 } 00:12:12.837 ], 00:12:12.837 "driver_specific": {} 00:12:12.837 } 00:12:12.837 ] 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.837 18:09:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.837 "name": "Existed_Raid", 00:12:12.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.837 "strip_size_kb": 0, 00:12:12.837 "state": "configuring", 00:12:12.837 "raid_level": "raid1", 00:12:12.837 "superblock": false, 00:12:12.837 "num_base_bdevs": 4, 00:12:12.837 "num_base_bdevs_discovered": 3, 00:12:12.837 "num_base_bdevs_operational": 4, 00:12:12.837 "base_bdevs_list": [ 00:12:12.837 { 00:12:12.837 "name": "BaseBdev1", 00:12:12.837 "uuid": "03ec3569-9af2-4587-ac05-19a75b67f504", 00:12:12.837 "is_configured": true, 00:12:12.837 "data_offset": 0, 00:12:12.837 "data_size": 65536 00:12:12.837 }, 00:12:12.837 { 00:12:12.837 "name": "BaseBdev2", 00:12:12.837 "uuid": "eaeecefe-27db-4c11-a9ee-a39ebecf4359", 00:12:12.837 "is_configured": true, 00:12:12.837 "data_offset": 0, 00:12:12.837 "data_size": 65536 00:12:12.837 }, 00:12:12.837 { 00:12:12.837 "name": "BaseBdev3", 00:12:12.837 "uuid": "50bbb92d-cddb-48a6-9708-4c7cb679ba86", 00:12:12.837 "is_configured": true, 00:12:12.837 "data_offset": 0, 00:12:12.837 "data_size": 65536 00:12:12.837 }, 00:12:12.837 { 00:12:12.837 "name": "BaseBdev4", 00:12:12.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.837 "is_configured": false, 00:12:12.837 "data_offset": 
0, 00:12:12.837 "data_size": 0 00:12:12.837 } 00:12:12.837 ] 00:12:12.837 }' 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.837 18:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.418 [2024-12-06 18:09:25.323329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.418 [2024-12-06 18:09:25.323460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:13.418 [2024-12-06 18:09:25.323487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:13.418 [2024-12-06 18:09:25.323790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:13.418 [2024-12-06 18:09:25.324010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:13.418 [2024-12-06 18:09:25.324059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:13.418 [2024-12-06 18:09:25.324376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.418 BaseBdev4 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.418 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.419 [ 00:12:13.419 { 00:12:13.419 "name": "BaseBdev4", 00:12:13.419 "aliases": [ 00:12:13.419 "62e0ccb8-a453-4d0f-8877-7a830aaaaaf6" 00:12:13.419 ], 00:12:13.419 "product_name": "Malloc disk", 00:12:13.419 "block_size": 512, 00:12:13.419 "num_blocks": 65536, 00:12:13.419 "uuid": "62e0ccb8-a453-4d0f-8877-7a830aaaaaf6", 00:12:13.419 "assigned_rate_limits": { 00:12:13.419 "rw_ios_per_sec": 0, 00:12:13.419 "rw_mbytes_per_sec": 0, 00:12:13.419 "r_mbytes_per_sec": 0, 00:12:13.419 "w_mbytes_per_sec": 0 00:12:13.419 }, 00:12:13.419 "claimed": true, 00:12:13.419 "claim_type": "exclusive_write", 00:12:13.419 "zoned": false, 00:12:13.419 "supported_io_types": { 00:12:13.419 "read": true, 00:12:13.419 "write": true, 00:12:13.419 "unmap": true, 00:12:13.419 "flush": true, 00:12:13.419 "reset": true, 00:12:13.419 "nvme_admin": false, 00:12:13.419 "nvme_io": 
false, 00:12:13.419 "nvme_io_md": false, 00:12:13.419 "write_zeroes": true, 00:12:13.419 "zcopy": true, 00:12:13.419 "get_zone_info": false, 00:12:13.419 "zone_management": false, 00:12:13.419 "zone_append": false, 00:12:13.419 "compare": false, 00:12:13.419 "compare_and_write": false, 00:12:13.419 "abort": true, 00:12:13.419 "seek_hole": false, 00:12:13.419 "seek_data": false, 00:12:13.419 "copy": true, 00:12:13.419 "nvme_iov_md": false 00:12:13.419 }, 00:12:13.419 "memory_domains": [ 00:12:13.419 { 00:12:13.419 "dma_device_id": "system", 00:12:13.419 "dma_device_type": 1 00:12:13.419 }, 00:12:13.419 { 00:12:13.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.419 "dma_device_type": 2 00:12:13.419 } 00:12:13.419 ], 00:12:13.419 "driver_specific": {} 00:12:13.419 } 00:12:13.419 ] 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.419 "name": "Existed_Raid", 00:12:13.419 "uuid": "2fc19116-1462-4c96-b606-751736941145", 00:12:13.419 "strip_size_kb": 0, 00:12:13.419 "state": "online", 00:12:13.419 "raid_level": "raid1", 00:12:13.419 "superblock": false, 00:12:13.419 "num_base_bdevs": 4, 00:12:13.419 "num_base_bdevs_discovered": 4, 00:12:13.419 "num_base_bdevs_operational": 4, 00:12:13.419 "base_bdevs_list": [ 00:12:13.419 { 00:12:13.419 "name": "BaseBdev1", 00:12:13.419 "uuid": "03ec3569-9af2-4587-ac05-19a75b67f504", 00:12:13.419 "is_configured": true, 00:12:13.419 "data_offset": 0, 00:12:13.419 "data_size": 65536 00:12:13.419 }, 00:12:13.419 { 00:12:13.419 "name": "BaseBdev2", 00:12:13.419 "uuid": "eaeecefe-27db-4c11-a9ee-a39ebecf4359", 00:12:13.419 "is_configured": true, 00:12:13.419 "data_offset": 0, 00:12:13.419 "data_size": 65536 00:12:13.419 }, 00:12:13.419 { 00:12:13.419 "name": "BaseBdev3", 00:12:13.419 "uuid": "50bbb92d-cddb-48a6-9708-4c7cb679ba86", 
00:12:13.419 "is_configured": true, 00:12:13.419 "data_offset": 0, 00:12:13.419 "data_size": 65536 00:12:13.419 }, 00:12:13.419 { 00:12:13.419 "name": "BaseBdev4", 00:12:13.419 "uuid": "62e0ccb8-a453-4d0f-8877-7a830aaaaaf6", 00:12:13.419 "is_configured": true, 00:12:13.419 "data_offset": 0, 00:12:13.419 "data_size": 65536 00:12:13.419 } 00:12:13.419 ] 00:12:13.419 }' 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.419 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.678 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:13.678 [2024-12-06 18:09:25.838889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:13.937 "name": "Existed_Raid", 00:12:13.937 "aliases": [ 00:12:13.937 "2fc19116-1462-4c96-b606-751736941145" 00:12:13.937 ], 00:12:13.937 "product_name": "Raid Volume", 00:12:13.937 "block_size": 512, 00:12:13.937 "num_blocks": 65536, 00:12:13.937 "uuid": "2fc19116-1462-4c96-b606-751736941145", 00:12:13.937 "assigned_rate_limits": { 00:12:13.937 "rw_ios_per_sec": 0, 00:12:13.937 "rw_mbytes_per_sec": 0, 00:12:13.937 "r_mbytes_per_sec": 0, 00:12:13.937 "w_mbytes_per_sec": 0 00:12:13.937 }, 00:12:13.937 "claimed": false, 00:12:13.937 "zoned": false, 00:12:13.937 "supported_io_types": { 00:12:13.937 "read": true, 00:12:13.937 "write": true, 00:12:13.937 "unmap": false, 00:12:13.937 "flush": false, 00:12:13.937 "reset": true, 00:12:13.937 "nvme_admin": false, 00:12:13.937 "nvme_io": false, 00:12:13.937 "nvme_io_md": false, 00:12:13.937 "write_zeroes": true, 00:12:13.937 "zcopy": false, 00:12:13.937 "get_zone_info": false, 00:12:13.937 "zone_management": false, 00:12:13.937 "zone_append": false, 00:12:13.937 "compare": false, 00:12:13.937 "compare_and_write": false, 00:12:13.937 "abort": false, 00:12:13.937 "seek_hole": false, 00:12:13.937 "seek_data": false, 00:12:13.937 "copy": false, 00:12:13.937 "nvme_iov_md": false 00:12:13.937 }, 00:12:13.937 "memory_domains": [ 00:12:13.937 { 00:12:13.937 "dma_device_id": "system", 00:12:13.937 "dma_device_type": 1 00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.937 "dma_device_type": 2 00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "dma_device_id": "system", 00:12:13.937 "dma_device_type": 1 00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.937 "dma_device_type": 2 00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "dma_device_id": "system", 00:12:13.937 "dma_device_type": 1 00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.937 "dma_device_type": 2 
00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "dma_device_id": "system", 00:12:13.937 "dma_device_type": 1 00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.937 "dma_device_type": 2 00:12:13.937 } 00:12:13.937 ], 00:12:13.937 "driver_specific": { 00:12:13.937 "raid": { 00:12:13.937 "uuid": "2fc19116-1462-4c96-b606-751736941145", 00:12:13.937 "strip_size_kb": 0, 00:12:13.937 "state": "online", 00:12:13.937 "raid_level": "raid1", 00:12:13.937 "superblock": false, 00:12:13.937 "num_base_bdevs": 4, 00:12:13.937 "num_base_bdevs_discovered": 4, 00:12:13.937 "num_base_bdevs_operational": 4, 00:12:13.937 "base_bdevs_list": [ 00:12:13.937 { 00:12:13.937 "name": "BaseBdev1", 00:12:13.937 "uuid": "03ec3569-9af2-4587-ac05-19a75b67f504", 00:12:13.937 "is_configured": true, 00:12:13.937 "data_offset": 0, 00:12:13.937 "data_size": 65536 00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "name": "BaseBdev2", 00:12:13.937 "uuid": "eaeecefe-27db-4c11-a9ee-a39ebecf4359", 00:12:13.937 "is_configured": true, 00:12:13.937 "data_offset": 0, 00:12:13.937 "data_size": 65536 00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "name": "BaseBdev3", 00:12:13.937 "uuid": "50bbb92d-cddb-48a6-9708-4c7cb679ba86", 00:12:13.937 "is_configured": true, 00:12:13.937 "data_offset": 0, 00:12:13.937 "data_size": 65536 00:12:13.937 }, 00:12:13.937 { 00:12:13.937 "name": "BaseBdev4", 00:12:13.937 "uuid": "62e0ccb8-a453-4d0f-8877-7a830aaaaaf6", 00:12:13.937 "is_configured": true, 00:12:13.937 "data_offset": 0, 00:12:13.937 "data_size": 65536 00:12:13.937 } 00:12:13.937 ] 00:12:13.937 } 00:12:13.937 } 00:12:13.937 }' 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:13.937 BaseBdev2 00:12:13.937 BaseBdev3 00:12:13.937 BaseBdev4' 00:12:13.937 
18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.937 18:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.937 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.197 [2024-12-06 18:09:26.158028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.197 "name": "Existed_Raid", 00:12:14.197 "uuid": "2fc19116-1462-4c96-b606-751736941145", 00:12:14.197 "strip_size_kb": 0, 00:12:14.197 "state": "online", 00:12:14.197 "raid_level": "raid1", 00:12:14.197 "superblock": false, 00:12:14.197 "num_base_bdevs": 4, 00:12:14.197 "num_base_bdevs_discovered": 3, 00:12:14.197 "num_base_bdevs_operational": 3, 00:12:14.197 "base_bdevs_list": [ 00:12:14.197 { 00:12:14.197 "name": null, 00:12:14.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.197 "is_configured": false, 00:12:14.197 "data_offset": 0, 00:12:14.197 "data_size": 65536 00:12:14.197 }, 00:12:14.197 { 00:12:14.197 "name": "BaseBdev2", 00:12:14.197 "uuid": "eaeecefe-27db-4c11-a9ee-a39ebecf4359", 00:12:14.197 "is_configured": true, 00:12:14.197 "data_offset": 0, 00:12:14.197 "data_size": 65536 00:12:14.197 }, 00:12:14.197 { 00:12:14.197 "name": "BaseBdev3", 00:12:14.197 "uuid": "50bbb92d-cddb-48a6-9708-4c7cb679ba86", 00:12:14.197 "is_configured": true, 00:12:14.197 "data_offset": 0, 00:12:14.197 "data_size": 65536 00:12:14.197 }, 00:12:14.197 { 
00:12:14.197 "name": "BaseBdev4", 00:12:14.197 "uuid": "62e0ccb8-a453-4d0f-8877-7a830aaaaaf6", 00:12:14.197 "is_configured": true, 00:12:14.197 "data_offset": 0, 00:12:14.197 "data_size": 65536 00:12:14.197 } 00:12:14.197 ] 00:12:14.197 }' 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.197 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.767 [2024-12-06 18:09:26.827724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.767 
18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:14.767 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.027 18:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.027 [2024-12-06 18:09:26.983317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:15.027 18:09:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.027 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.027 [2024-12-06 18:09:27.146895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:15.027 [2024-12-06 18:09:27.147057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.287 [2024-12-06 18:09:27.245396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.287 [2024-12-06 18:09:27.245538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.287 [2024-12-06 18:09:27.245582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.287 18:09:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.287 BaseBdev2 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.287 18:09:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.287 [ 00:12:15.287 { 00:12:15.287 "name": "BaseBdev2", 00:12:15.287 "aliases": [ 00:12:15.287 "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd" 00:12:15.287 ], 00:12:15.287 "product_name": "Malloc disk", 00:12:15.287 "block_size": 512, 00:12:15.287 "num_blocks": 65536, 00:12:15.287 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:15.287 "assigned_rate_limits": { 00:12:15.287 "rw_ios_per_sec": 0, 00:12:15.287 "rw_mbytes_per_sec": 0, 00:12:15.287 "r_mbytes_per_sec": 0, 00:12:15.287 "w_mbytes_per_sec": 0 00:12:15.287 }, 00:12:15.287 "claimed": false, 00:12:15.287 "zoned": false, 00:12:15.287 "supported_io_types": { 00:12:15.287 "read": true, 00:12:15.287 "write": true, 00:12:15.287 "unmap": true, 00:12:15.287 "flush": true, 00:12:15.287 "reset": true, 00:12:15.287 "nvme_admin": false, 00:12:15.287 "nvme_io": false, 00:12:15.287 "nvme_io_md": false, 00:12:15.287 "write_zeroes": true, 00:12:15.287 "zcopy": true, 00:12:15.287 "get_zone_info": false, 00:12:15.287 "zone_management": false, 00:12:15.287 "zone_append": false, 00:12:15.287 "compare": false, 00:12:15.287 "compare_and_write": false, 
00:12:15.287 "abort": true, 00:12:15.287 "seek_hole": false, 00:12:15.287 "seek_data": false, 00:12:15.287 "copy": true, 00:12:15.287 "nvme_iov_md": false 00:12:15.287 }, 00:12:15.287 "memory_domains": [ 00:12:15.287 { 00:12:15.287 "dma_device_id": "system", 00:12:15.287 "dma_device_type": 1 00:12:15.287 }, 00:12:15.287 { 00:12:15.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.287 "dma_device_type": 2 00:12:15.287 } 00:12:15.287 ], 00:12:15.287 "driver_specific": {} 00:12:15.287 } 00:12:15.287 ] 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.287 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.288 BaseBdev3 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.288 18:09:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.288 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.547 [ 00:12:15.547 { 00:12:15.547 "name": "BaseBdev3", 00:12:15.547 "aliases": [ 00:12:15.547 "16a40e33-7b57-42dd-879b-317b27e4f168" 00:12:15.547 ], 00:12:15.547 "product_name": "Malloc disk", 00:12:15.547 "block_size": 512, 00:12:15.548 "num_blocks": 65536, 00:12:15.548 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:15.548 "assigned_rate_limits": { 00:12:15.548 "rw_ios_per_sec": 0, 00:12:15.548 "rw_mbytes_per_sec": 0, 00:12:15.548 "r_mbytes_per_sec": 0, 00:12:15.548 "w_mbytes_per_sec": 0 00:12:15.548 }, 00:12:15.548 "claimed": false, 00:12:15.548 "zoned": false, 00:12:15.548 "supported_io_types": { 00:12:15.548 "read": true, 00:12:15.548 "write": true, 00:12:15.548 "unmap": true, 00:12:15.548 "flush": true, 00:12:15.548 "reset": true, 00:12:15.548 "nvme_admin": false, 00:12:15.548 "nvme_io": false, 00:12:15.548 "nvme_io_md": false, 00:12:15.548 "write_zeroes": true, 00:12:15.548 "zcopy": true, 00:12:15.548 "get_zone_info": false, 00:12:15.548 "zone_management": false, 00:12:15.548 "zone_append": false, 00:12:15.548 "compare": false, 00:12:15.548 "compare_and_write": false, 
00:12:15.548 "abort": true, 00:12:15.548 "seek_hole": false, 00:12:15.548 "seek_data": false, 00:12:15.548 "copy": true, 00:12:15.548 "nvme_iov_md": false 00:12:15.548 }, 00:12:15.548 "memory_domains": [ 00:12:15.548 { 00:12:15.548 "dma_device_id": "system", 00:12:15.548 "dma_device_type": 1 00:12:15.548 }, 00:12:15.548 { 00:12:15.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.548 "dma_device_type": 2 00:12:15.548 } 00:12:15.548 ], 00:12:15.548 "driver_specific": {} 00:12:15.548 } 00:12:15.548 ] 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.548 BaseBdev4 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.548 18:09:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.548 [ 00:12:15.548 { 00:12:15.548 "name": "BaseBdev4", 00:12:15.548 "aliases": [ 00:12:15.548 "35ea1d6a-b0db-4060-a454-e62113158ea9" 00:12:15.548 ], 00:12:15.548 "product_name": "Malloc disk", 00:12:15.548 "block_size": 512, 00:12:15.548 "num_blocks": 65536, 00:12:15.548 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:15.548 "assigned_rate_limits": { 00:12:15.548 "rw_ios_per_sec": 0, 00:12:15.548 "rw_mbytes_per_sec": 0, 00:12:15.548 "r_mbytes_per_sec": 0, 00:12:15.548 "w_mbytes_per_sec": 0 00:12:15.548 }, 00:12:15.548 "claimed": false, 00:12:15.548 "zoned": false, 00:12:15.548 "supported_io_types": { 00:12:15.548 "read": true, 00:12:15.548 "write": true, 00:12:15.548 "unmap": true, 00:12:15.548 "flush": true, 00:12:15.548 "reset": true, 00:12:15.548 "nvme_admin": false, 00:12:15.548 "nvme_io": false, 00:12:15.548 "nvme_io_md": false, 00:12:15.548 "write_zeroes": true, 00:12:15.548 "zcopy": true, 00:12:15.548 "get_zone_info": false, 00:12:15.548 "zone_management": false, 00:12:15.548 "zone_append": false, 00:12:15.548 "compare": false, 00:12:15.548 "compare_and_write": false, 
00:12:15.548 "abort": true, 00:12:15.548 "seek_hole": false, 00:12:15.548 "seek_data": false, 00:12:15.548 "copy": true, 00:12:15.548 "nvme_iov_md": false 00:12:15.548 }, 00:12:15.548 "memory_domains": [ 00:12:15.548 { 00:12:15.548 "dma_device_id": "system", 00:12:15.548 "dma_device_type": 1 00:12:15.548 }, 00:12:15.548 { 00:12:15.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.548 "dma_device_type": 2 00:12:15.548 } 00:12:15.548 ], 00:12:15.548 "driver_specific": {} 00:12:15.548 } 00:12:15.548 ] 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.548 [2024-12-06 18:09:27.568617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.548 [2024-12-06 18:09:27.568741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.548 [2024-12-06 18:09:27.568796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.548 [2024-12-06 18:09:27.570674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.548 [2024-12-06 18:09:27.570781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.548 18:09:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.548 "name": "Existed_Raid", 00:12:15.548 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:15.548 "strip_size_kb": 0, 00:12:15.548 "state": "configuring", 00:12:15.548 "raid_level": "raid1", 00:12:15.548 "superblock": false, 00:12:15.548 "num_base_bdevs": 4, 00:12:15.548 "num_base_bdevs_discovered": 3, 00:12:15.548 "num_base_bdevs_operational": 4, 00:12:15.548 "base_bdevs_list": [ 00:12:15.548 { 00:12:15.548 "name": "BaseBdev1", 00:12:15.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.548 "is_configured": false, 00:12:15.548 "data_offset": 0, 00:12:15.548 "data_size": 0 00:12:15.548 }, 00:12:15.548 { 00:12:15.548 "name": "BaseBdev2", 00:12:15.548 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:15.548 "is_configured": true, 00:12:15.548 "data_offset": 0, 00:12:15.548 "data_size": 65536 00:12:15.548 }, 00:12:15.548 { 00:12:15.548 "name": "BaseBdev3", 00:12:15.548 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:15.548 "is_configured": true, 00:12:15.548 "data_offset": 0, 00:12:15.548 "data_size": 65536 00:12:15.548 }, 00:12:15.548 { 00:12:15.548 "name": "BaseBdev4", 00:12:15.548 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:15.548 "is_configured": true, 00:12:15.548 "data_offset": 0, 00:12:15.548 "data_size": 65536 00:12:15.548 } 00:12:15.548 ] 00:12:15.548 }' 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.548 18:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.117 [2024-12-06 18:09:28.067767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.117 "name": "Existed_Raid", 00:12:16.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.117 
"strip_size_kb": 0, 00:12:16.117 "state": "configuring", 00:12:16.117 "raid_level": "raid1", 00:12:16.117 "superblock": false, 00:12:16.117 "num_base_bdevs": 4, 00:12:16.117 "num_base_bdevs_discovered": 2, 00:12:16.117 "num_base_bdevs_operational": 4, 00:12:16.117 "base_bdevs_list": [ 00:12:16.117 { 00:12:16.117 "name": "BaseBdev1", 00:12:16.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.117 "is_configured": false, 00:12:16.117 "data_offset": 0, 00:12:16.117 "data_size": 0 00:12:16.117 }, 00:12:16.117 { 00:12:16.117 "name": null, 00:12:16.117 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:16.117 "is_configured": false, 00:12:16.117 "data_offset": 0, 00:12:16.117 "data_size": 65536 00:12:16.117 }, 00:12:16.117 { 00:12:16.117 "name": "BaseBdev3", 00:12:16.117 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:16.117 "is_configured": true, 00:12:16.117 "data_offset": 0, 00:12:16.117 "data_size": 65536 00:12:16.117 }, 00:12:16.117 { 00:12:16.117 "name": "BaseBdev4", 00:12:16.117 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:16.117 "is_configured": true, 00:12:16.117 "data_offset": 0, 00:12:16.117 "data_size": 65536 00:12:16.117 } 00:12:16.117 ] 00:12:16.117 }' 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.117 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.685 18:09:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 [2024-12-06 18:09:28.636892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.685 BaseBdev1 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.685 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 [ 00:12:16.685 { 00:12:16.685 "name": "BaseBdev1", 00:12:16.685 "aliases": [ 00:12:16.685 "df68a234-9e95-4a69-a097-2b808b6379ec" 00:12:16.685 ], 00:12:16.685 "product_name": "Malloc disk", 00:12:16.685 "block_size": 512, 00:12:16.685 "num_blocks": 65536, 00:12:16.685 "uuid": "df68a234-9e95-4a69-a097-2b808b6379ec", 00:12:16.685 "assigned_rate_limits": { 00:12:16.685 "rw_ios_per_sec": 0, 00:12:16.685 "rw_mbytes_per_sec": 0, 00:12:16.685 "r_mbytes_per_sec": 0, 00:12:16.685 "w_mbytes_per_sec": 0 00:12:16.685 }, 00:12:16.685 "claimed": true, 00:12:16.685 "claim_type": "exclusive_write", 00:12:16.685 "zoned": false, 00:12:16.685 "supported_io_types": { 00:12:16.685 "read": true, 00:12:16.685 "write": true, 00:12:16.685 "unmap": true, 00:12:16.685 "flush": true, 00:12:16.685 "reset": true, 00:12:16.685 "nvme_admin": false, 00:12:16.685 "nvme_io": false, 00:12:16.685 "nvme_io_md": false, 00:12:16.685 "write_zeroes": true, 00:12:16.685 "zcopy": true, 00:12:16.685 "get_zone_info": false, 00:12:16.685 "zone_management": false, 00:12:16.685 "zone_append": false, 00:12:16.685 "compare": false, 00:12:16.685 "compare_and_write": false, 00:12:16.685 "abort": true, 00:12:16.685 "seek_hole": false, 00:12:16.685 "seek_data": false, 00:12:16.685 "copy": true, 00:12:16.685 "nvme_iov_md": false 00:12:16.685 }, 00:12:16.685 "memory_domains": [ 00:12:16.685 { 00:12:16.685 "dma_device_id": "system", 00:12:16.685 "dma_device_type": 1 00:12:16.685 }, 00:12:16.685 { 00:12:16.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.686 "dma_device_type": 2 00:12:16.686 } 00:12:16.686 ], 00:12:16.686 "driver_specific": {} 00:12:16.686 } 00:12:16.686 ] 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.686 "name": "Existed_Raid", 00:12:16.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.686 
"strip_size_kb": 0, 00:12:16.686 "state": "configuring", 00:12:16.686 "raid_level": "raid1", 00:12:16.686 "superblock": false, 00:12:16.686 "num_base_bdevs": 4, 00:12:16.686 "num_base_bdevs_discovered": 3, 00:12:16.686 "num_base_bdevs_operational": 4, 00:12:16.686 "base_bdevs_list": [ 00:12:16.686 { 00:12:16.686 "name": "BaseBdev1", 00:12:16.686 "uuid": "df68a234-9e95-4a69-a097-2b808b6379ec", 00:12:16.686 "is_configured": true, 00:12:16.686 "data_offset": 0, 00:12:16.686 "data_size": 65536 00:12:16.686 }, 00:12:16.686 { 00:12:16.686 "name": null, 00:12:16.686 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:16.686 "is_configured": false, 00:12:16.686 "data_offset": 0, 00:12:16.686 "data_size": 65536 00:12:16.686 }, 00:12:16.686 { 00:12:16.686 "name": "BaseBdev3", 00:12:16.686 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:16.686 "is_configured": true, 00:12:16.686 "data_offset": 0, 00:12:16.686 "data_size": 65536 00:12:16.686 }, 00:12:16.686 { 00:12:16.686 "name": "BaseBdev4", 00:12:16.686 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:16.686 "is_configured": true, 00:12:16.686 "data_offset": 0, 00:12:16.686 "data_size": 65536 00:12:16.686 } 00:12:16.686 ] 00:12:16.686 }' 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.686 18:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.256 
18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.256 [2024-12-06 18:09:29.172119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.256 "name": "Existed_Raid", 00:12:17.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.256 "strip_size_kb": 0, 00:12:17.256 "state": "configuring", 00:12:17.256 "raid_level": "raid1", 00:12:17.256 "superblock": false, 00:12:17.256 "num_base_bdevs": 4, 00:12:17.256 "num_base_bdevs_discovered": 2, 00:12:17.256 "num_base_bdevs_operational": 4, 00:12:17.256 "base_bdevs_list": [ 00:12:17.256 { 00:12:17.256 "name": "BaseBdev1", 00:12:17.256 "uuid": "df68a234-9e95-4a69-a097-2b808b6379ec", 00:12:17.256 "is_configured": true, 00:12:17.256 "data_offset": 0, 00:12:17.256 "data_size": 65536 00:12:17.256 }, 00:12:17.256 { 00:12:17.256 "name": null, 00:12:17.256 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:17.256 "is_configured": false, 00:12:17.256 "data_offset": 0, 00:12:17.256 "data_size": 65536 00:12:17.256 }, 00:12:17.256 { 00:12:17.256 "name": null, 00:12:17.256 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:17.256 "is_configured": false, 00:12:17.256 "data_offset": 0, 00:12:17.256 "data_size": 65536 00:12:17.256 }, 00:12:17.256 { 00:12:17.256 "name": "BaseBdev4", 00:12:17.256 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:17.256 "is_configured": true, 00:12:17.256 "data_offset": 0, 00:12:17.256 "data_size": 65536 00:12:17.256 } 00:12:17.256 ] 00:12:17.256 }' 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.256 18:09:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.516 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:17.516 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.516 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.516 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.516 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.776 [2024-12-06 18:09:29.695212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.776 "name": "Existed_Raid", 00:12:17.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.776 "strip_size_kb": 0, 00:12:17.776 "state": "configuring", 00:12:17.776 "raid_level": "raid1", 00:12:17.776 "superblock": false, 00:12:17.776 "num_base_bdevs": 4, 00:12:17.776 "num_base_bdevs_discovered": 3, 00:12:17.776 "num_base_bdevs_operational": 4, 00:12:17.776 "base_bdevs_list": [ 00:12:17.776 { 00:12:17.776 "name": "BaseBdev1", 00:12:17.776 "uuid": "df68a234-9e95-4a69-a097-2b808b6379ec", 00:12:17.776 "is_configured": true, 00:12:17.776 "data_offset": 0, 00:12:17.776 "data_size": 65536 00:12:17.776 }, 00:12:17.776 { 00:12:17.776 "name": null, 00:12:17.776 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:17.776 "is_configured": false, 00:12:17.776 "data_offset": 0, 00:12:17.776 "data_size": 65536 00:12:17.776 }, 00:12:17.776 { 
00:12:17.776 "name": "BaseBdev3", 00:12:17.776 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:17.776 "is_configured": true, 00:12:17.776 "data_offset": 0, 00:12:17.776 "data_size": 65536 00:12:17.776 }, 00:12:17.776 { 00:12:17.776 "name": "BaseBdev4", 00:12:17.776 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:17.776 "is_configured": true, 00:12:17.776 "data_offset": 0, 00:12:17.776 "data_size": 65536 00:12:17.776 } 00:12:17.776 ] 00:12:17.776 }' 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.776 18:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.035 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.035 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:18.035 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.035 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.035 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.035 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:18.035 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:18.035 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.035 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.035 [2024-12-06 18:09:30.194392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.295 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.295 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.296 "name": "Existed_Raid", 00:12:18.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.296 "strip_size_kb": 0, 00:12:18.296 "state": "configuring", 00:12:18.296 "raid_level": "raid1", 00:12:18.296 "superblock": false, 00:12:18.296 
"num_base_bdevs": 4, 00:12:18.296 "num_base_bdevs_discovered": 2, 00:12:18.296 "num_base_bdevs_operational": 4, 00:12:18.296 "base_bdevs_list": [ 00:12:18.296 { 00:12:18.296 "name": null, 00:12:18.296 "uuid": "df68a234-9e95-4a69-a097-2b808b6379ec", 00:12:18.296 "is_configured": false, 00:12:18.296 "data_offset": 0, 00:12:18.296 "data_size": 65536 00:12:18.296 }, 00:12:18.296 { 00:12:18.296 "name": null, 00:12:18.296 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:18.296 "is_configured": false, 00:12:18.296 "data_offset": 0, 00:12:18.296 "data_size": 65536 00:12:18.296 }, 00:12:18.296 { 00:12:18.296 "name": "BaseBdev3", 00:12:18.296 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:18.296 "is_configured": true, 00:12:18.296 "data_offset": 0, 00:12:18.296 "data_size": 65536 00:12:18.296 }, 00:12:18.296 { 00:12:18.296 "name": "BaseBdev4", 00:12:18.296 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:18.296 "is_configured": true, 00:12:18.296 "data_offset": 0, 00:12:18.296 "data_size": 65536 00:12:18.296 } 00:12:18.296 ] 00:12:18.296 }' 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.296 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.866 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.866 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.866 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.866 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:18.866 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.866 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:18.866 18:09:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.867 [2024-12-06 18:09:30.828508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.867 18:09:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.867 "name": "Existed_Raid", 00:12:18.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.867 "strip_size_kb": 0, 00:12:18.867 "state": "configuring", 00:12:18.867 "raid_level": "raid1", 00:12:18.867 "superblock": false, 00:12:18.867 "num_base_bdevs": 4, 00:12:18.867 "num_base_bdevs_discovered": 3, 00:12:18.867 "num_base_bdevs_operational": 4, 00:12:18.867 "base_bdevs_list": [ 00:12:18.867 { 00:12:18.867 "name": null, 00:12:18.867 "uuid": "df68a234-9e95-4a69-a097-2b808b6379ec", 00:12:18.867 "is_configured": false, 00:12:18.867 "data_offset": 0, 00:12:18.867 "data_size": 65536 00:12:18.867 }, 00:12:18.867 { 00:12:18.867 "name": "BaseBdev2", 00:12:18.867 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:18.867 "is_configured": true, 00:12:18.867 "data_offset": 0, 00:12:18.867 "data_size": 65536 00:12:18.867 }, 00:12:18.867 { 00:12:18.867 "name": "BaseBdev3", 00:12:18.867 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:18.867 "is_configured": true, 00:12:18.867 "data_offset": 0, 00:12:18.867 "data_size": 65536 00:12:18.867 }, 00:12:18.867 { 00:12:18.867 "name": "BaseBdev4", 00:12:18.867 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:18.867 "is_configured": true, 00:12:18.867 "data_offset": 0, 00:12:18.867 "data_size": 65536 00:12:18.867 } 00:12:18.867 ] 00:12:18.867 }' 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.867 18:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.126 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u df68a234-9e95-4a69-a097-2b808b6379ec 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.385 [2024-12-06 18:09:31.372960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:19.385 [2024-12-06 18:09:31.373012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:19.385 [2024-12-06 18:09:31.373021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:19.385 [2024-12-06 18:09:31.373321] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:19.385 [2024-12-06 18:09:31.373481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:19.385 [2024-12-06 18:09:31.373498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:19.385 [2024-12-06 18:09:31.373782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.385 NewBaseBdev 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:19.385 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.385 18:09:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.385 [ 00:12:19.385 { 00:12:19.385 "name": "NewBaseBdev", 00:12:19.385 "aliases": [ 00:12:19.386 "df68a234-9e95-4a69-a097-2b808b6379ec" 00:12:19.386 ], 00:12:19.386 "product_name": "Malloc disk", 00:12:19.386 "block_size": 512, 00:12:19.386 "num_blocks": 65536, 00:12:19.386 "uuid": "df68a234-9e95-4a69-a097-2b808b6379ec", 00:12:19.386 "assigned_rate_limits": { 00:12:19.386 "rw_ios_per_sec": 0, 00:12:19.386 "rw_mbytes_per_sec": 0, 00:12:19.386 "r_mbytes_per_sec": 0, 00:12:19.386 "w_mbytes_per_sec": 0 00:12:19.386 }, 00:12:19.386 "claimed": true, 00:12:19.386 "claim_type": "exclusive_write", 00:12:19.386 "zoned": false, 00:12:19.386 "supported_io_types": { 00:12:19.386 "read": true, 00:12:19.386 "write": true, 00:12:19.386 "unmap": true, 00:12:19.386 "flush": true, 00:12:19.386 "reset": true, 00:12:19.386 "nvme_admin": false, 00:12:19.386 "nvme_io": false, 00:12:19.386 "nvme_io_md": false, 00:12:19.386 "write_zeroes": true, 00:12:19.386 "zcopy": true, 00:12:19.386 "get_zone_info": false, 00:12:19.386 "zone_management": false, 00:12:19.386 "zone_append": false, 00:12:19.386 "compare": false, 00:12:19.386 "compare_and_write": false, 00:12:19.386 "abort": true, 00:12:19.386 "seek_hole": false, 00:12:19.386 "seek_data": false, 00:12:19.386 "copy": true, 00:12:19.386 "nvme_iov_md": false 00:12:19.386 }, 00:12:19.386 "memory_domains": [ 00:12:19.386 { 00:12:19.386 "dma_device_id": "system", 00:12:19.386 "dma_device_type": 1 00:12:19.386 }, 00:12:19.386 { 00:12:19.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.386 "dma_device_type": 2 00:12:19.386 } 00:12:19.386 ], 00:12:19.386 "driver_specific": {} 00:12:19.386 } 00:12:19.386 ] 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:19.386 18:09:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.386 "name": "Existed_Raid", 00:12:19.386 "uuid": "02a625f5-2e1d-494f-9e9a-a80fc57e8fa7", 00:12:19.386 "strip_size_kb": 0, 00:12:19.386 "state": "online", 00:12:19.386 "raid_level": "raid1", 
00:12:19.386 "superblock": false, 00:12:19.386 "num_base_bdevs": 4, 00:12:19.386 "num_base_bdevs_discovered": 4, 00:12:19.386 "num_base_bdevs_operational": 4, 00:12:19.386 "base_bdevs_list": [ 00:12:19.386 { 00:12:19.386 "name": "NewBaseBdev", 00:12:19.386 "uuid": "df68a234-9e95-4a69-a097-2b808b6379ec", 00:12:19.386 "is_configured": true, 00:12:19.386 "data_offset": 0, 00:12:19.386 "data_size": 65536 00:12:19.386 }, 00:12:19.386 { 00:12:19.386 "name": "BaseBdev2", 00:12:19.386 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:19.386 "is_configured": true, 00:12:19.386 "data_offset": 0, 00:12:19.386 "data_size": 65536 00:12:19.386 }, 00:12:19.386 { 00:12:19.386 "name": "BaseBdev3", 00:12:19.386 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:19.386 "is_configured": true, 00:12:19.386 "data_offset": 0, 00:12:19.386 "data_size": 65536 00:12:19.386 }, 00:12:19.386 { 00:12:19.386 "name": "BaseBdev4", 00:12:19.386 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:19.386 "is_configured": true, 00:12:19.386 "data_offset": 0, 00:12:19.386 "data_size": 65536 00:12:19.386 } 00:12:19.386 ] 00:12:19.386 }' 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.386 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.955 [2024-12-06 18:09:31.908546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.955 "name": "Existed_Raid", 00:12:19.955 "aliases": [ 00:12:19.955 "02a625f5-2e1d-494f-9e9a-a80fc57e8fa7" 00:12:19.955 ], 00:12:19.955 "product_name": "Raid Volume", 00:12:19.955 "block_size": 512, 00:12:19.955 "num_blocks": 65536, 00:12:19.955 "uuid": "02a625f5-2e1d-494f-9e9a-a80fc57e8fa7", 00:12:19.955 "assigned_rate_limits": { 00:12:19.955 "rw_ios_per_sec": 0, 00:12:19.955 "rw_mbytes_per_sec": 0, 00:12:19.955 "r_mbytes_per_sec": 0, 00:12:19.955 "w_mbytes_per_sec": 0 00:12:19.955 }, 00:12:19.955 "claimed": false, 00:12:19.955 "zoned": false, 00:12:19.955 "supported_io_types": { 00:12:19.955 "read": true, 00:12:19.955 "write": true, 00:12:19.955 "unmap": false, 00:12:19.955 "flush": false, 00:12:19.955 "reset": true, 00:12:19.955 "nvme_admin": false, 00:12:19.955 "nvme_io": false, 00:12:19.955 "nvme_io_md": false, 00:12:19.955 "write_zeroes": true, 00:12:19.955 "zcopy": false, 00:12:19.955 "get_zone_info": false, 00:12:19.955 "zone_management": false, 00:12:19.955 "zone_append": false, 00:12:19.955 "compare": false, 00:12:19.955 "compare_and_write": false, 00:12:19.955 "abort": false, 00:12:19.955 "seek_hole": false, 00:12:19.955 "seek_data": false, 00:12:19.955 "copy": false, 00:12:19.955 
"nvme_iov_md": false 00:12:19.955 }, 00:12:19.955 "memory_domains": [ 00:12:19.955 { 00:12:19.955 "dma_device_id": "system", 00:12:19.955 "dma_device_type": 1 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.955 "dma_device_type": 2 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "dma_device_id": "system", 00:12:19.955 "dma_device_type": 1 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.955 "dma_device_type": 2 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "dma_device_id": "system", 00:12:19.955 "dma_device_type": 1 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.955 "dma_device_type": 2 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "dma_device_id": "system", 00:12:19.955 "dma_device_type": 1 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.955 "dma_device_type": 2 00:12:19.955 } 00:12:19.955 ], 00:12:19.955 "driver_specific": { 00:12:19.955 "raid": { 00:12:19.955 "uuid": "02a625f5-2e1d-494f-9e9a-a80fc57e8fa7", 00:12:19.955 "strip_size_kb": 0, 00:12:19.955 "state": "online", 00:12:19.955 "raid_level": "raid1", 00:12:19.955 "superblock": false, 00:12:19.955 "num_base_bdevs": 4, 00:12:19.955 "num_base_bdevs_discovered": 4, 00:12:19.955 "num_base_bdevs_operational": 4, 00:12:19.955 "base_bdevs_list": [ 00:12:19.955 { 00:12:19.955 "name": "NewBaseBdev", 00:12:19.955 "uuid": "df68a234-9e95-4a69-a097-2b808b6379ec", 00:12:19.955 "is_configured": true, 00:12:19.955 "data_offset": 0, 00:12:19.955 "data_size": 65536 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "name": "BaseBdev2", 00:12:19.955 "uuid": "e657204e-0a3c-4ab1-a04e-6f12a4d74dcd", 00:12:19.955 "is_configured": true, 00:12:19.955 "data_offset": 0, 00:12:19.955 "data_size": 65536 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "name": "BaseBdev3", 00:12:19.955 "uuid": "16a40e33-7b57-42dd-879b-317b27e4f168", 00:12:19.955 "is_configured": true, 
00:12:19.955 "data_offset": 0, 00:12:19.955 "data_size": 65536 00:12:19.955 }, 00:12:19.955 { 00:12:19.955 "name": "BaseBdev4", 00:12:19.955 "uuid": "35ea1d6a-b0db-4060-a454-e62113158ea9", 00:12:19.955 "is_configured": true, 00:12:19.955 "data_offset": 0, 00:12:19.955 "data_size": 65536 00:12:19.955 } 00:12:19.955 ] 00:12:19.955 } 00:12:19.955 } 00:12:19.955 }' 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.955 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:19.956 BaseBdev2 00:12:19.956 BaseBdev3 00:12:19.956 BaseBdev4' 00:12:19.956 18:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.956 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.214 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.215 [2024-12-06 18:09:32.211615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.215 [2024-12-06 18:09:32.211649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.215 [2024-12-06 18:09:32.211754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.215 [2024-12-06 18:09:32.212102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.215 [2024-12-06 18:09:32.212124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73671 
00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73671 ']' 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73671 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73671 00:12:20.215 killing process with pid 73671 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73671' 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73671 00:12:20.215 [2024-12-06 18:09:32.247470] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.215 18:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73671 00:12:20.782 [2024-12-06 18:09:32.707941] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.159 ************************************ 00:12:22.159 END TEST raid_state_function_test 00:12:22.159 ************************************ 00:12:22.159 18:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:22.159 00:12:22.159 real 0m12.258s 00:12:22.159 user 0m19.400s 00:12:22.159 sys 0m2.206s 00:12:22.159 18:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.159 18:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.159 18:09:34 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:22.159 18:09:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:22.159 18:09:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.159 18:09:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.159 ************************************ 00:12:22.159 START TEST raid_state_function_test_sb 00:12:22.159 ************************************ 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:22.159 18:09:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74348 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:22.159 Process raid pid: 74348 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74348' 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74348 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74348 ']' 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.159 18:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.160 18:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.160 18:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.160 18:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.160 [2024-12-06 18:09:34.149986] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:12:22.160 [2024-12-06 18:09:34.150130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.418 [2024-12-06 18:09:34.327588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.418 [2024-12-06 18:09:34.456074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.678 [2024-12-06 18:09:34.683892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.678 [2024-12-06 18:09:34.683943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.938 [2024-12-06 18:09:35.031493] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.938 [2024-12-06 18:09:35.031574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.938 [2024-12-06 18:09:35.031587] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.938 [2024-12-06 18:09:35.031599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.938 [2024-12-06 18:09:35.031607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:22.938 [2024-12-06 18:09:35.031618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:22.938 [2024-12-06 18:09:35.031626] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:22.938 [2024-12-06 18:09:35.031637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.938 18:09:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.938 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.938 "name": "Existed_Raid", 00:12:22.938 "uuid": "0235d84d-0acb-4e2f-a9c2-84ca06f08e34", 00:12:22.938 "strip_size_kb": 0, 00:12:22.938 "state": "configuring", 00:12:22.938 "raid_level": "raid1", 00:12:22.938 "superblock": true, 00:12:22.938 "num_base_bdevs": 4, 00:12:22.938 "num_base_bdevs_discovered": 0, 00:12:22.938 "num_base_bdevs_operational": 4, 00:12:22.938 "base_bdevs_list": [ 00:12:22.938 { 00:12:22.939 "name": "BaseBdev1", 00:12:22.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.939 "is_configured": false, 00:12:22.939 "data_offset": 0, 00:12:22.939 "data_size": 0 00:12:22.939 }, 00:12:22.939 { 00:12:22.939 "name": "BaseBdev2", 00:12:22.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.939 "is_configured": false, 00:12:22.939 "data_offset": 0, 00:12:22.939 "data_size": 0 00:12:22.939 }, 00:12:22.939 { 00:12:22.939 "name": "BaseBdev3", 00:12:22.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.939 "is_configured": false, 00:12:22.939 "data_offset": 0, 00:12:22.939 "data_size": 0 00:12:22.939 }, 00:12:22.939 { 00:12:22.939 "name": "BaseBdev4", 00:12:22.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.939 "is_configured": false, 00:12:22.939 "data_offset": 0, 00:12:22.939 "data_size": 0 00:12:22.939 } 00:12:22.939 ] 00:12:22.939 }' 00:12:22.939 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.939 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.506 18:09:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.506 [2024-12-06 18:09:35.490668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.506 [2024-12-06 18:09:35.490719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.506 [2024-12-06 18:09:35.502646] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:23.506 [2024-12-06 18:09:35.502693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:23.506 [2024-12-06 18:09:35.502704] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.506 [2024-12-06 18:09:35.502714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.506 [2024-12-06 18:09:35.502722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:23.506 [2024-12-06 18:09:35.502732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:23.506 [2024-12-06 18:09:35.502739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:12:23.506 [2024-12-06 18:09:35.502749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.506 [2024-12-06 18:09:35.554415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.506 BaseBdev1 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.506 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.506 [ 00:12:23.506 { 00:12:23.506 "name": "BaseBdev1", 00:12:23.506 "aliases": [ 00:12:23.506 "7f2e99b2-d8aa-47ce-95a7-12b6d7921cf3" 00:12:23.506 ], 00:12:23.506 "product_name": "Malloc disk", 00:12:23.506 "block_size": 512, 00:12:23.506 "num_blocks": 65536, 00:12:23.506 "uuid": "7f2e99b2-d8aa-47ce-95a7-12b6d7921cf3", 00:12:23.506 "assigned_rate_limits": { 00:12:23.506 "rw_ios_per_sec": 0, 00:12:23.506 "rw_mbytes_per_sec": 0, 00:12:23.506 "r_mbytes_per_sec": 0, 00:12:23.506 "w_mbytes_per_sec": 0 00:12:23.506 }, 00:12:23.506 "claimed": true, 00:12:23.506 "claim_type": "exclusive_write", 00:12:23.506 "zoned": false, 00:12:23.506 "supported_io_types": { 00:12:23.506 "read": true, 00:12:23.506 "write": true, 00:12:23.506 "unmap": true, 00:12:23.506 "flush": true, 00:12:23.506 "reset": true, 00:12:23.506 "nvme_admin": false, 00:12:23.506 "nvme_io": false, 00:12:23.506 "nvme_io_md": false, 00:12:23.506 "write_zeroes": true, 00:12:23.506 "zcopy": true, 00:12:23.506 "get_zone_info": false, 00:12:23.506 "zone_management": false, 00:12:23.506 "zone_append": false, 00:12:23.506 "compare": false, 00:12:23.506 "compare_and_write": false, 00:12:23.506 "abort": true, 00:12:23.506 "seek_hole": false, 00:12:23.506 "seek_data": false, 00:12:23.506 "copy": true, 00:12:23.506 "nvme_iov_md": false 00:12:23.506 }, 00:12:23.506 "memory_domains": [ 00:12:23.506 { 00:12:23.506 "dma_device_id": "system", 00:12:23.506 "dma_device_type": 1 00:12:23.506 }, 00:12:23.506 { 00:12:23.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.506 "dma_device_type": 2 00:12:23.506 } 00:12:23.506 
], 00:12:23.506 "driver_specific": {} 00:12:23.506 } 00:12:23.506 ] 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.507 18:09:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.507 "name": "Existed_Raid", 00:12:23.507 "uuid": "93aeb471-d636-4da2-ade2-007ebce6aa7b", 00:12:23.507 "strip_size_kb": 0, 00:12:23.507 "state": "configuring", 00:12:23.507 "raid_level": "raid1", 00:12:23.507 "superblock": true, 00:12:23.507 "num_base_bdevs": 4, 00:12:23.507 "num_base_bdevs_discovered": 1, 00:12:23.507 "num_base_bdevs_operational": 4, 00:12:23.507 "base_bdevs_list": [ 00:12:23.507 { 00:12:23.507 "name": "BaseBdev1", 00:12:23.507 "uuid": "7f2e99b2-d8aa-47ce-95a7-12b6d7921cf3", 00:12:23.507 "is_configured": true, 00:12:23.507 "data_offset": 2048, 00:12:23.507 "data_size": 63488 00:12:23.507 }, 00:12:23.507 { 00:12:23.507 "name": "BaseBdev2", 00:12:23.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.507 "is_configured": false, 00:12:23.507 "data_offset": 0, 00:12:23.507 "data_size": 0 00:12:23.507 }, 00:12:23.507 { 00:12:23.507 "name": "BaseBdev3", 00:12:23.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.507 "is_configured": false, 00:12:23.507 "data_offset": 0, 00:12:23.507 "data_size": 0 00:12:23.507 }, 00:12:23.507 { 00:12:23.507 "name": "BaseBdev4", 00:12:23.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.507 "is_configured": false, 00:12:23.507 "data_offset": 0, 00:12:23.507 "data_size": 0 00:12:23.507 } 00:12:23.507 ] 00:12:23.507 }' 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.507 18:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.073 18:09:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.073 [2024-12-06 18:09:36.061675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:24.073 [2024-12-06 18:09:36.061748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.073 [2024-12-06 18:09:36.073701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.073 [2024-12-06 18:09:36.075803] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:24.073 [2024-12-06 18:09:36.075854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:24.073 [2024-12-06 18:09:36.075865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:24.073 [2024-12-06 18:09:36.075876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:24.073 [2024-12-06 18:09:36.075884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:24.073 [2024-12-06 18:09:36.075894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.073 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:24.073 "name": "Existed_Raid", 00:12:24.073 "uuid": "afd25045-83cb-4a32-adec-82549ce9ecdc", 00:12:24.073 "strip_size_kb": 0, 00:12:24.073 "state": "configuring", 00:12:24.073 "raid_level": "raid1", 00:12:24.073 "superblock": true, 00:12:24.073 "num_base_bdevs": 4, 00:12:24.073 "num_base_bdevs_discovered": 1, 00:12:24.073 "num_base_bdevs_operational": 4, 00:12:24.073 "base_bdevs_list": [ 00:12:24.073 { 00:12:24.073 "name": "BaseBdev1", 00:12:24.073 "uuid": "7f2e99b2-d8aa-47ce-95a7-12b6d7921cf3", 00:12:24.073 "is_configured": true, 00:12:24.073 "data_offset": 2048, 00:12:24.073 "data_size": 63488 00:12:24.073 }, 00:12:24.073 { 00:12:24.073 "name": "BaseBdev2", 00:12:24.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.073 "is_configured": false, 00:12:24.073 "data_offset": 0, 00:12:24.073 "data_size": 0 00:12:24.073 }, 00:12:24.073 { 00:12:24.073 "name": "BaseBdev3", 00:12:24.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.073 "is_configured": false, 00:12:24.073 "data_offset": 0, 00:12:24.073 "data_size": 0 00:12:24.073 }, 00:12:24.073 { 00:12:24.073 "name": "BaseBdev4", 00:12:24.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.073 "is_configured": false, 00:12:24.073 "data_offset": 0, 00:12:24.073 "data_size": 0 00:12:24.073 } 00:12:24.073 ] 00:12:24.074 }' 00:12:24.074 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.074 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.642 BaseBdev2 00:12:24.642 [2024-12-06 18:09:36.599962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.642 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.642 [ 00:12:24.642 { 00:12:24.642 "name": "BaseBdev2", 00:12:24.642 "aliases": [ 00:12:24.642 "128b1d6b-2e3f-47c3-a1b8-7cfadb5d179e" 00:12:24.642 ], 00:12:24.642 "product_name": "Malloc disk", 00:12:24.642 "block_size": 512, 00:12:24.642 "num_blocks": 65536, 00:12:24.642 "uuid": "128b1d6b-2e3f-47c3-a1b8-7cfadb5d179e", 00:12:24.642 
"assigned_rate_limits": { 00:12:24.642 "rw_ios_per_sec": 0, 00:12:24.642 "rw_mbytes_per_sec": 0, 00:12:24.642 "r_mbytes_per_sec": 0, 00:12:24.642 "w_mbytes_per_sec": 0 00:12:24.642 }, 00:12:24.642 "claimed": true, 00:12:24.642 "claim_type": "exclusive_write", 00:12:24.642 "zoned": false, 00:12:24.642 "supported_io_types": { 00:12:24.642 "read": true, 00:12:24.642 "write": true, 00:12:24.642 "unmap": true, 00:12:24.643 "flush": true, 00:12:24.643 "reset": true, 00:12:24.643 "nvme_admin": false, 00:12:24.643 "nvme_io": false, 00:12:24.643 "nvme_io_md": false, 00:12:24.643 "write_zeroes": true, 00:12:24.643 "zcopy": true, 00:12:24.643 "get_zone_info": false, 00:12:24.643 "zone_management": false, 00:12:24.643 "zone_append": false, 00:12:24.643 "compare": false, 00:12:24.643 "compare_and_write": false, 00:12:24.643 "abort": true, 00:12:24.643 "seek_hole": false, 00:12:24.643 "seek_data": false, 00:12:24.643 "copy": true, 00:12:24.643 "nvme_iov_md": false 00:12:24.643 }, 00:12:24.643 "memory_domains": [ 00:12:24.643 { 00:12:24.643 "dma_device_id": "system", 00:12:24.643 "dma_device_type": 1 00:12:24.643 }, 00:12:24.643 { 00:12:24.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.643 "dma_device_type": 2 00:12:24.643 } 00:12:24.643 ], 00:12:24.643 "driver_specific": {} 00:12:24.643 } 00:12:24.643 ] 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.643 "name": "Existed_Raid", 00:12:24.643 "uuid": "afd25045-83cb-4a32-adec-82549ce9ecdc", 00:12:24.643 "strip_size_kb": 0, 00:12:24.643 "state": "configuring", 00:12:24.643 "raid_level": "raid1", 00:12:24.643 "superblock": true, 00:12:24.643 "num_base_bdevs": 4, 00:12:24.643 "num_base_bdevs_discovered": 2, 00:12:24.643 "num_base_bdevs_operational": 4, 
00:12:24.643 "base_bdevs_list": [ 00:12:24.643 { 00:12:24.643 "name": "BaseBdev1", 00:12:24.643 "uuid": "7f2e99b2-d8aa-47ce-95a7-12b6d7921cf3", 00:12:24.643 "is_configured": true, 00:12:24.643 "data_offset": 2048, 00:12:24.643 "data_size": 63488 00:12:24.643 }, 00:12:24.643 { 00:12:24.643 "name": "BaseBdev2", 00:12:24.643 "uuid": "128b1d6b-2e3f-47c3-a1b8-7cfadb5d179e", 00:12:24.643 "is_configured": true, 00:12:24.643 "data_offset": 2048, 00:12:24.643 "data_size": 63488 00:12:24.643 }, 00:12:24.643 { 00:12:24.643 "name": "BaseBdev3", 00:12:24.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.643 "is_configured": false, 00:12:24.643 "data_offset": 0, 00:12:24.643 "data_size": 0 00:12:24.643 }, 00:12:24.643 { 00:12:24.643 "name": "BaseBdev4", 00:12:24.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.643 "is_configured": false, 00:12:24.643 "data_offset": 0, 00:12:24.643 "data_size": 0 00:12:24.643 } 00:12:24.643 ] 00:12:24.643 }' 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.643 18:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.211 [2024-12-06 18:09:37.134673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.211 BaseBdev3 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.211 [ 00:12:25.211 { 00:12:25.211 "name": "BaseBdev3", 00:12:25.211 "aliases": [ 00:12:25.211 "7ffe075a-adc2-4222-b3e2-91b1ddce3319" 00:12:25.211 ], 00:12:25.211 "product_name": "Malloc disk", 00:12:25.211 "block_size": 512, 00:12:25.211 "num_blocks": 65536, 00:12:25.211 "uuid": "7ffe075a-adc2-4222-b3e2-91b1ddce3319", 00:12:25.211 "assigned_rate_limits": { 00:12:25.211 "rw_ios_per_sec": 0, 00:12:25.211 "rw_mbytes_per_sec": 0, 00:12:25.211 "r_mbytes_per_sec": 0, 00:12:25.211 "w_mbytes_per_sec": 0 00:12:25.211 }, 00:12:25.211 "claimed": true, 00:12:25.211 "claim_type": "exclusive_write", 00:12:25.211 "zoned": false, 00:12:25.211 "supported_io_types": { 00:12:25.211 "read": true, 00:12:25.211 
"write": true, 00:12:25.211 "unmap": true, 00:12:25.211 "flush": true, 00:12:25.211 "reset": true, 00:12:25.211 "nvme_admin": false, 00:12:25.211 "nvme_io": false, 00:12:25.211 "nvme_io_md": false, 00:12:25.211 "write_zeroes": true, 00:12:25.211 "zcopy": true, 00:12:25.211 "get_zone_info": false, 00:12:25.211 "zone_management": false, 00:12:25.211 "zone_append": false, 00:12:25.211 "compare": false, 00:12:25.211 "compare_and_write": false, 00:12:25.211 "abort": true, 00:12:25.211 "seek_hole": false, 00:12:25.211 "seek_data": false, 00:12:25.211 "copy": true, 00:12:25.211 "nvme_iov_md": false 00:12:25.211 }, 00:12:25.211 "memory_domains": [ 00:12:25.211 { 00:12:25.211 "dma_device_id": "system", 00:12:25.211 "dma_device_type": 1 00:12:25.211 }, 00:12:25.211 { 00:12:25.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.211 "dma_device_type": 2 00:12:25.211 } 00:12:25.211 ], 00:12:25.211 "driver_specific": {} 00:12:25.211 } 00:12:25.211 ] 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.211 "name": "Existed_Raid", 00:12:25.211 "uuid": "afd25045-83cb-4a32-adec-82549ce9ecdc", 00:12:25.211 "strip_size_kb": 0, 00:12:25.211 "state": "configuring", 00:12:25.211 "raid_level": "raid1", 00:12:25.211 "superblock": true, 00:12:25.211 "num_base_bdevs": 4, 00:12:25.211 "num_base_bdevs_discovered": 3, 00:12:25.211 "num_base_bdevs_operational": 4, 00:12:25.211 "base_bdevs_list": [ 00:12:25.211 { 00:12:25.211 "name": "BaseBdev1", 00:12:25.211 "uuid": "7f2e99b2-d8aa-47ce-95a7-12b6d7921cf3", 00:12:25.211 "is_configured": true, 00:12:25.211 "data_offset": 2048, 00:12:25.211 "data_size": 63488 00:12:25.211 }, 00:12:25.211 { 00:12:25.211 "name": "BaseBdev2", 00:12:25.211 "uuid": 
"128b1d6b-2e3f-47c3-a1b8-7cfadb5d179e", 00:12:25.211 "is_configured": true, 00:12:25.211 "data_offset": 2048, 00:12:25.211 "data_size": 63488 00:12:25.211 }, 00:12:25.211 { 00:12:25.211 "name": "BaseBdev3", 00:12:25.211 "uuid": "7ffe075a-adc2-4222-b3e2-91b1ddce3319", 00:12:25.211 "is_configured": true, 00:12:25.211 "data_offset": 2048, 00:12:25.211 "data_size": 63488 00:12:25.211 }, 00:12:25.211 { 00:12:25.211 "name": "BaseBdev4", 00:12:25.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.211 "is_configured": false, 00:12:25.211 "data_offset": 0, 00:12:25.211 "data_size": 0 00:12:25.211 } 00:12:25.211 ] 00:12:25.211 }' 00:12:25.211 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.212 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.470 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:25.470 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.470 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.729 [2024-12-06 18:09:37.663941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:25.729 [2024-12-06 18:09:37.664281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:25.729 [2024-12-06 18:09:37.664306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.729 [2024-12-06 18:09:37.664621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:25.729 [2024-12-06 18:09:37.664810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:25.729 [2024-12-06 18:09:37.664833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:25.729 BaseBdev4 00:12:25.729 [2024-12-06 18:09:37.665007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.729 [ 00:12:25.729 { 00:12:25.729 "name": "BaseBdev4", 00:12:25.729 "aliases": [ 00:12:25.729 "424bfc2d-7875-4111-a748-4ef0d471d34c" 00:12:25.729 ], 00:12:25.729 "product_name": "Malloc disk", 00:12:25.729 "block_size": 512, 00:12:25.729 
"num_blocks": 65536, 00:12:25.729 "uuid": "424bfc2d-7875-4111-a748-4ef0d471d34c", 00:12:25.729 "assigned_rate_limits": { 00:12:25.729 "rw_ios_per_sec": 0, 00:12:25.729 "rw_mbytes_per_sec": 0, 00:12:25.729 "r_mbytes_per_sec": 0, 00:12:25.729 "w_mbytes_per_sec": 0 00:12:25.729 }, 00:12:25.729 "claimed": true, 00:12:25.729 "claim_type": "exclusive_write", 00:12:25.729 "zoned": false, 00:12:25.729 "supported_io_types": { 00:12:25.729 "read": true, 00:12:25.729 "write": true, 00:12:25.729 "unmap": true, 00:12:25.729 "flush": true, 00:12:25.729 "reset": true, 00:12:25.729 "nvme_admin": false, 00:12:25.729 "nvme_io": false, 00:12:25.729 "nvme_io_md": false, 00:12:25.729 "write_zeroes": true, 00:12:25.729 "zcopy": true, 00:12:25.729 "get_zone_info": false, 00:12:25.729 "zone_management": false, 00:12:25.729 "zone_append": false, 00:12:25.729 "compare": false, 00:12:25.729 "compare_and_write": false, 00:12:25.729 "abort": true, 00:12:25.729 "seek_hole": false, 00:12:25.729 "seek_data": false, 00:12:25.729 "copy": true, 00:12:25.729 "nvme_iov_md": false 00:12:25.729 }, 00:12:25.729 "memory_domains": [ 00:12:25.729 { 00:12:25.729 "dma_device_id": "system", 00:12:25.729 "dma_device_type": 1 00:12:25.729 }, 00:12:25.729 { 00:12:25.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.729 "dma_device_type": 2 00:12:25.729 } 00:12:25.729 ], 00:12:25.729 "driver_specific": {} 00:12:25.729 } 00:12:25.729 ] 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.729 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.730 "name": "Existed_Raid", 00:12:25.730 "uuid": "afd25045-83cb-4a32-adec-82549ce9ecdc", 00:12:25.730 "strip_size_kb": 0, 00:12:25.730 "state": "online", 00:12:25.730 "raid_level": "raid1", 00:12:25.730 "superblock": true, 00:12:25.730 "num_base_bdevs": 4, 
00:12:25.730 "num_base_bdevs_discovered": 4, 00:12:25.730 "num_base_bdevs_operational": 4, 00:12:25.730 "base_bdevs_list": [ 00:12:25.730 { 00:12:25.730 "name": "BaseBdev1", 00:12:25.730 "uuid": "7f2e99b2-d8aa-47ce-95a7-12b6d7921cf3", 00:12:25.730 "is_configured": true, 00:12:25.730 "data_offset": 2048, 00:12:25.730 "data_size": 63488 00:12:25.730 }, 00:12:25.730 { 00:12:25.730 "name": "BaseBdev2", 00:12:25.730 "uuid": "128b1d6b-2e3f-47c3-a1b8-7cfadb5d179e", 00:12:25.730 "is_configured": true, 00:12:25.730 "data_offset": 2048, 00:12:25.730 "data_size": 63488 00:12:25.730 }, 00:12:25.730 { 00:12:25.730 "name": "BaseBdev3", 00:12:25.730 "uuid": "7ffe075a-adc2-4222-b3e2-91b1ddce3319", 00:12:25.730 "is_configured": true, 00:12:25.730 "data_offset": 2048, 00:12:25.730 "data_size": 63488 00:12:25.730 }, 00:12:25.730 { 00:12:25.730 "name": "BaseBdev4", 00:12:25.730 "uuid": "424bfc2d-7875-4111-a748-4ef0d471d34c", 00:12:25.730 "is_configured": true, 00:12:25.730 "data_offset": 2048, 00:12:25.730 "data_size": 63488 00:12:25.730 } 00:12:25.730 ] 00:12:25.730 }' 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.730 18:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.296 
18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.296 [2024-12-06 18:09:38.203569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.296 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:26.296 "name": "Existed_Raid", 00:12:26.296 "aliases": [ 00:12:26.296 "afd25045-83cb-4a32-adec-82549ce9ecdc" 00:12:26.296 ], 00:12:26.296 "product_name": "Raid Volume", 00:12:26.296 "block_size": 512, 00:12:26.296 "num_blocks": 63488, 00:12:26.296 "uuid": "afd25045-83cb-4a32-adec-82549ce9ecdc", 00:12:26.296 "assigned_rate_limits": { 00:12:26.296 "rw_ios_per_sec": 0, 00:12:26.296 "rw_mbytes_per_sec": 0, 00:12:26.296 "r_mbytes_per_sec": 0, 00:12:26.296 "w_mbytes_per_sec": 0 00:12:26.296 }, 00:12:26.296 "claimed": false, 00:12:26.296 "zoned": false, 00:12:26.296 "supported_io_types": { 00:12:26.296 "read": true, 00:12:26.296 "write": true, 00:12:26.296 "unmap": false, 00:12:26.296 "flush": false, 00:12:26.296 "reset": true, 00:12:26.296 "nvme_admin": false, 00:12:26.296 "nvme_io": false, 00:12:26.296 "nvme_io_md": false, 00:12:26.296 "write_zeroes": true, 00:12:26.296 "zcopy": false, 00:12:26.296 "get_zone_info": false, 00:12:26.296 "zone_management": false, 00:12:26.296 "zone_append": false, 00:12:26.296 "compare": false, 00:12:26.296 "compare_and_write": false, 00:12:26.296 "abort": false, 00:12:26.296 "seek_hole": false, 00:12:26.296 "seek_data": false, 00:12:26.296 "copy": false, 00:12:26.296 
"nvme_iov_md": false 00:12:26.296 }, 00:12:26.296 "memory_domains": [ 00:12:26.296 { 00:12:26.296 "dma_device_id": "system", 00:12:26.296 "dma_device_type": 1 00:12:26.296 }, 00:12:26.296 { 00:12:26.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.296 "dma_device_type": 2 00:12:26.296 }, 00:12:26.296 { 00:12:26.296 "dma_device_id": "system", 00:12:26.296 "dma_device_type": 1 00:12:26.296 }, 00:12:26.296 { 00:12:26.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.296 "dma_device_type": 2 00:12:26.296 }, 00:12:26.297 { 00:12:26.297 "dma_device_id": "system", 00:12:26.297 "dma_device_type": 1 00:12:26.297 }, 00:12:26.297 { 00:12:26.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.297 "dma_device_type": 2 00:12:26.297 }, 00:12:26.297 { 00:12:26.297 "dma_device_id": "system", 00:12:26.297 "dma_device_type": 1 00:12:26.297 }, 00:12:26.297 { 00:12:26.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.297 "dma_device_type": 2 00:12:26.297 } 00:12:26.297 ], 00:12:26.297 "driver_specific": { 00:12:26.297 "raid": { 00:12:26.297 "uuid": "afd25045-83cb-4a32-adec-82549ce9ecdc", 00:12:26.297 "strip_size_kb": 0, 00:12:26.297 "state": "online", 00:12:26.297 "raid_level": "raid1", 00:12:26.297 "superblock": true, 00:12:26.297 "num_base_bdevs": 4, 00:12:26.297 "num_base_bdevs_discovered": 4, 00:12:26.297 "num_base_bdevs_operational": 4, 00:12:26.297 "base_bdevs_list": [ 00:12:26.297 { 00:12:26.297 "name": "BaseBdev1", 00:12:26.297 "uuid": "7f2e99b2-d8aa-47ce-95a7-12b6d7921cf3", 00:12:26.297 "is_configured": true, 00:12:26.297 "data_offset": 2048, 00:12:26.297 "data_size": 63488 00:12:26.297 }, 00:12:26.297 { 00:12:26.297 "name": "BaseBdev2", 00:12:26.297 "uuid": "128b1d6b-2e3f-47c3-a1b8-7cfadb5d179e", 00:12:26.297 "is_configured": true, 00:12:26.297 "data_offset": 2048, 00:12:26.297 "data_size": 63488 00:12:26.297 }, 00:12:26.297 { 00:12:26.297 "name": "BaseBdev3", 00:12:26.297 "uuid": "7ffe075a-adc2-4222-b3e2-91b1ddce3319", 00:12:26.297 "is_configured": true, 
00:12:26.297 "data_offset": 2048, 00:12:26.297 "data_size": 63488 00:12:26.297 }, 00:12:26.297 { 00:12:26.297 "name": "BaseBdev4", 00:12:26.297 "uuid": "424bfc2d-7875-4111-a748-4ef0d471d34c", 00:12:26.297 "is_configured": true, 00:12:26.297 "data_offset": 2048, 00:12:26.297 "data_size": 63488 00:12:26.297 } 00:12:26.297 ] 00:12:26.297 } 00:12:26.297 } 00:12:26.297 }' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:26.297 BaseBdev2 00:12:26.297 BaseBdev3 00:12:26.297 BaseBdev4' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.297 18:09:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.297 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 [2024-12-06 18:09:38.502743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:26.556 18:09:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.556 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.556 "name": "Existed_Raid", 00:12:26.556 "uuid": "afd25045-83cb-4a32-adec-82549ce9ecdc", 00:12:26.556 "strip_size_kb": 0, 00:12:26.556 
"state": "online", 00:12:26.556 "raid_level": "raid1", 00:12:26.556 "superblock": true, 00:12:26.556 "num_base_bdevs": 4, 00:12:26.556 "num_base_bdevs_discovered": 3, 00:12:26.556 "num_base_bdevs_operational": 3, 00:12:26.556 "base_bdevs_list": [ 00:12:26.556 { 00:12:26.556 "name": null, 00:12:26.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.556 "is_configured": false, 00:12:26.556 "data_offset": 0, 00:12:26.556 "data_size": 63488 00:12:26.556 }, 00:12:26.556 { 00:12:26.556 "name": "BaseBdev2", 00:12:26.556 "uuid": "128b1d6b-2e3f-47c3-a1b8-7cfadb5d179e", 00:12:26.556 "is_configured": true, 00:12:26.556 "data_offset": 2048, 00:12:26.556 "data_size": 63488 00:12:26.556 }, 00:12:26.556 { 00:12:26.556 "name": "BaseBdev3", 00:12:26.556 "uuid": "7ffe075a-adc2-4222-b3e2-91b1ddce3319", 00:12:26.556 "is_configured": true, 00:12:26.556 "data_offset": 2048, 00:12:26.556 "data_size": 63488 00:12:26.557 }, 00:12:26.557 { 00:12:26.557 "name": "BaseBdev4", 00:12:26.557 "uuid": "424bfc2d-7875-4111-a748-4ef0d471d34c", 00:12:26.557 "is_configured": true, 00:12:26.557 "data_offset": 2048, 00:12:26.557 "data_size": 63488 00:12:26.557 } 00:12:26.557 ] 00:12:26.557 }' 00:12:26.557 18:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.557 18:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.123 18:09:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.123 [2024-12-06 18:09:39.174428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:27.123 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.385 [2024-12-06 18:09:39.344054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:27.385 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.386 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.386 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.386 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:27.386 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:27.386 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:27.386 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.386 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.386 [2024-12-06 18:09:39.507829] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:27.386 [2024-12-06 18:09:39.508036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.651 [2024-12-06 18:09:39.614440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.651 [2024-12-06 18:09:39.614507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.651 [2024-12-06 18:09:39.614519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.651 BaseBdev2 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.651 18:09:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:27.651 [ 00:12:27.651 { 00:12:27.652 "name": "BaseBdev2", 00:12:27.652 "aliases": [ 00:12:27.652 "45b4858e-a794-4503-8703-7fa92e47d0a7" 00:12:27.652 ], 00:12:27.652 "product_name": "Malloc disk", 00:12:27.652 "block_size": 512, 00:12:27.652 "num_blocks": 65536, 00:12:27.652 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 00:12:27.652 "assigned_rate_limits": { 00:12:27.652 "rw_ios_per_sec": 0, 00:12:27.652 "rw_mbytes_per_sec": 0, 00:12:27.652 "r_mbytes_per_sec": 0, 00:12:27.652 "w_mbytes_per_sec": 0 00:12:27.652 }, 00:12:27.652 "claimed": false, 00:12:27.652 "zoned": false, 00:12:27.652 "supported_io_types": { 00:12:27.652 "read": true, 00:12:27.652 "write": true, 00:12:27.652 "unmap": true, 00:12:27.652 "flush": true, 00:12:27.652 "reset": true, 00:12:27.652 "nvme_admin": false, 00:12:27.652 "nvme_io": false, 00:12:27.652 "nvme_io_md": false, 00:12:27.652 "write_zeroes": true, 00:12:27.652 "zcopy": true, 00:12:27.652 "get_zone_info": false, 00:12:27.652 "zone_management": false, 00:12:27.652 "zone_append": false, 00:12:27.652 "compare": false, 00:12:27.652 "compare_and_write": false, 00:12:27.652 "abort": true, 00:12:27.652 "seek_hole": false, 00:12:27.652 "seek_data": false, 00:12:27.652 "copy": true, 00:12:27.652 "nvme_iov_md": false 00:12:27.652 }, 00:12:27.652 "memory_domains": [ 00:12:27.652 { 00:12:27.652 "dma_device_id": "system", 00:12:27.652 "dma_device_type": 1 00:12:27.652 }, 00:12:27.652 { 00:12:27.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.652 "dma_device_type": 2 00:12:27.652 } 00:12:27.652 ], 00:12:27.652 "driver_specific": {} 00:12:27.652 } 00:12:27.652 ] 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:27.652 18:09:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.652 BaseBdev3 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:27.652 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.652 18:09:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 [ 00:12:27.912 { 00:12:27.912 "name": "BaseBdev3", 00:12:27.912 "aliases": [ 00:12:27.912 "9ad5077a-972a-446a-9b55-f764faf63150" 00:12:27.912 ], 00:12:27.912 "product_name": "Malloc disk", 00:12:27.912 "block_size": 512, 00:12:27.912 "num_blocks": 65536, 00:12:27.912 "uuid": "9ad5077a-972a-446a-9b55-f764faf63150", 00:12:27.912 "assigned_rate_limits": { 00:12:27.912 "rw_ios_per_sec": 0, 00:12:27.912 "rw_mbytes_per_sec": 0, 00:12:27.912 "r_mbytes_per_sec": 0, 00:12:27.912 "w_mbytes_per_sec": 0 00:12:27.912 }, 00:12:27.912 "claimed": false, 00:12:27.912 "zoned": false, 00:12:27.912 "supported_io_types": { 00:12:27.912 "read": true, 00:12:27.912 "write": true, 00:12:27.912 "unmap": true, 00:12:27.912 "flush": true, 00:12:27.912 "reset": true, 00:12:27.912 "nvme_admin": false, 00:12:27.912 "nvme_io": false, 00:12:27.912 "nvme_io_md": false, 00:12:27.912 "write_zeroes": true, 00:12:27.912 "zcopy": true, 00:12:27.912 "get_zone_info": false, 00:12:27.912 "zone_management": false, 00:12:27.912 "zone_append": false, 00:12:27.912 "compare": false, 00:12:27.912 "compare_and_write": false, 00:12:27.912 "abort": true, 00:12:27.912 "seek_hole": false, 00:12:27.912 "seek_data": false, 00:12:27.912 "copy": true, 00:12:27.912 "nvme_iov_md": false 00:12:27.912 }, 00:12:27.912 "memory_domains": [ 00:12:27.912 { 00:12:27.912 "dma_device_id": "system", 00:12:27.912 "dma_device_type": 1 00:12:27.912 }, 00:12:27.912 { 00:12:27.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.912 "dma_device_type": 2 00:12:27.912 } 00:12:27.912 ], 00:12:27.912 "driver_specific": {} 00:12:27.912 } 00:12:27.912 ] 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 BaseBdev4 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 [ 00:12:27.912 { 00:12:27.912 "name": "BaseBdev4", 00:12:27.912 "aliases": [ 00:12:27.912 "3b61484b-4748-4783-bb79-835f07760284" 00:12:27.912 ], 00:12:27.912 "product_name": "Malloc disk", 00:12:27.912 "block_size": 512, 00:12:27.912 "num_blocks": 65536, 00:12:27.912 "uuid": "3b61484b-4748-4783-bb79-835f07760284", 00:12:27.912 "assigned_rate_limits": { 00:12:27.912 "rw_ios_per_sec": 0, 00:12:27.912 "rw_mbytes_per_sec": 0, 00:12:27.912 "r_mbytes_per_sec": 0, 00:12:27.912 "w_mbytes_per_sec": 0 00:12:27.912 }, 00:12:27.912 "claimed": false, 00:12:27.912 "zoned": false, 00:12:27.912 "supported_io_types": { 00:12:27.912 "read": true, 00:12:27.912 "write": true, 00:12:27.912 "unmap": true, 00:12:27.912 "flush": true, 00:12:27.912 "reset": true, 00:12:27.912 "nvme_admin": false, 00:12:27.912 "nvme_io": false, 00:12:27.912 "nvme_io_md": false, 00:12:27.912 "write_zeroes": true, 00:12:27.912 "zcopy": true, 00:12:27.912 "get_zone_info": false, 00:12:27.912 "zone_management": false, 00:12:27.912 "zone_append": false, 00:12:27.912 "compare": false, 00:12:27.912 "compare_and_write": false, 00:12:27.912 "abort": true, 00:12:27.912 "seek_hole": false, 00:12:27.912 "seek_data": false, 00:12:27.912 "copy": true, 00:12:27.912 "nvme_iov_md": false 00:12:27.912 }, 00:12:27.912 "memory_domains": [ 00:12:27.912 { 00:12:27.912 "dma_device_id": "system", 00:12:27.912 "dma_device_type": 1 00:12:27.912 }, 00:12:27.912 { 00:12:27.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.912 "dma_device_type": 2 00:12:27.912 } 00:12:27.912 ], 00:12:27.912 "driver_specific": {} 00:12:27.912 } 00:12:27.912 ] 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
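The `waitforbdev` helper that appears repeatedly in the trace above polls `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>` until the named bdev shows up. For readers following the log, here is a minimal, self-contained sketch of that polling pattern; the `probe` function is a stand-in stub for the real `rpc_cmd bdev_get_bdevs` call (which needs a running SPDK target), and the attempt count and sleep interval are illustrative, not SPDK's actual values:

```shell
#!/usr/bin/env bash
# Stub standing in for `rpc_cmd bdev_get_bdevs -b $1`; it "succeeds"
# once a counter file says the bdev has appeared (3rd poll here).
probe() {
  count=$(( $(cat /tmp/probe_count 2>/dev/null || echo 0) + 1 ))
  echo "$count" > /tmp/probe_count
  [ "$count" -ge 3 ]
}

# waitforbdev-style loop: bounded retries, success as soon as the
# bdev is visible, nonzero exit if it never appears.
waitforbdev_sketch() {
  local i
  for (( i = 0; i < 10; i++ )); do
    if probe "$1"; then
      echo "bdev $1 is ready"
      return 0
    fi
    sleep 0.1
  done
  return 1
}

rm -f /tmp/probe_count
waitforbdev_sketch BaseBdev2   # prints: bdev BaseBdev2 is ready
```

The real helper (see `autotest_common.sh@908` in the trace) additionally calls `bdev_wait_for_examine` first, so claims from examine callbacks settle before the bdev is used.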
00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 [2024-12-06 18:09:39.930289] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:27.912 [2024-12-06 18:09:39.930396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:27.912 [2024-12-06 18:09:39.930444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.912 [2024-12-06 18:09:39.932591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.912 [2024-12-06 18:09:39.932690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.912 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.913 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.913 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.913 "name": "Existed_Raid", 00:12:27.913 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:27.913 "strip_size_kb": 0, 00:12:27.913 "state": "configuring", 00:12:27.913 "raid_level": "raid1", 00:12:27.913 "superblock": true, 00:12:27.913 "num_base_bdevs": 4, 00:12:27.913 "num_base_bdevs_discovered": 3, 00:12:27.913 "num_base_bdevs_operational": 4, 00:12:27.913 "base_bdevs_list": [ 00:12:27.913 { 00:12:27.913 "name": "BaseBdev1", 00:12:27.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.913 "is_configured": false, 00:12:27.913 "data_offset": 0, 00:12:27.913 "data_size": 0 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "name": "BaseBdev2", 00:12:27.913 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 
00:12:27.913 "is_configured": true, 00:12:27.913 "data_offset": 2048, 00:12:27.913 "data_size": 63488 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "name": "BaseBdev3", 00:12:27.913 "uuid": "9ad5077a-972a-446a-9b55-f764faf63150", 00:12:27.913 "is_configured": true, 00:12:27.913 "data_offset": 2048, 00:12:27.913 "data_size": 63488 00:12:27.913 }, 00:12:27.913 { 00:12:27.913 "name": "BaseBdev4", 00:12:27.913 "uuid": "3b61484b-4748-4783-bb79-835f07760284", 00:12:27.913 "is_configured": true, 00:12:27.913 "data_offset": 2048, 00:12:27.913 "data_size": 63488 00:12:27.913 } 00:12:27.913 ] 00:12:27.913 }' 00:12:27.913 18:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.913 18:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.481 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.482 [2024-12-06 18:09:40.397528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.482 "name": "Existed_Raid", 00:12:28.482 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:28.482 "strip_size_kb": 0, 00:12:28.482 "state": "configuring", 00:12:28.482 "raid_level": "raid1", 00:12:28.482 "superblock": true, 00:12:28.482 "num_base_bdevs": 4, 00:12:28.482 "num_base_bdevs_discovered": 2, 00:12:28.482 "num_base_bdevs_operational": 4, 00:12:28.482 "base_bdevs_list": [ 00:12:28.482 { 00:12:28.482 "name": "BaseBdev1", 00:12:28.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.482 "is_configured": false, 00:12:28.482 "data_offset": 0, 00:12:28.482 "data_size": 0 00:12:28.482 }, 00:12:28.482 { 00:12:28.482 "name": null, 00:12:28.482 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 00:12:28.482 
"is_configured": false, 00:12:28.482 "data_offset": 0, 00:12:28.482 "data_size": 63488 00:12:28.482 }, 00:12:28.482 { 00:12:28.482 "name": "BaseBdev3", 00:12:28.482 "uuid": "9ad5077a-972a-446a-9b55-f764faf63150", 00:12:28.482 "is_configured": true, 00:12:28.482 "data_offset": 2048, 00:12:28.482 "data_size": 63488 00:12:28.482 }, 00:12:28.482 { 00:12:28.482 "name": "BaseBdev4", 00:12:28.482 "uuid": "3b61484b-4748-4783-bb79-835f07760284", 00:12:28.482 "is_configured": true, 00:12:28.482 "data_offset": 2048, 00:12:28.482 "data_size": 63488 00:12:28.482 } 00:12:28.482 ] 00:12:28.482 }' 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.482 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.741 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.741 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:28.741 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.741 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.741 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.741 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:28.741 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:28.741 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.741 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.000 [2024-12-06 18:09:40.940779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.001 BaseBdev1 
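The JSON blobs in this log are what `rpc_cmd bdev_raid_get_bdevs all` returns; the test script picks fields out of them with `jq`, as at `bdev_raid.sh@271` and `@295` above. A self-contained sketch of that extraction, run against a sample JSON fragment that mirrors (in abbreviated form) the `Existed_Raid` state shown in the log (`jq` must be installed):

```shell
# Abbreviated sample of the bdev_raid_get_bdevs output seen above.
raid_json='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs":4,
  "base_bdevs_list":[
    {"name":"BaseBdev1","is_configured":true},
    {"name":null,"is_configured":false},
    {"name":"BaseBdev3","is_configured":true},
    {"name":"BaseBdev4","is_configured":true}]}]'

# Extract the raid bdev name, exactly as bdev_raid.sh@271 does.
name=$(echo "$raid_json" | jq -r '.[0]["name"]')
echo "$name"   # Existed_Raid

# Count configured base bdevs, analogous to num_base_bdevs_discovered.
echo "$raid_json" | jq '[.[0].base_bdevs_list[] | select(.is_configured)] | length'   # 3
```

This is why the log shows `num_base_bdevs_discovered: 3` while `num_base_bdevs_operational: 4`: one slot (`"name": null, "is_configured": false`) is a removed base bdev awaiting replacement.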
00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.001 [ 00:12:29.001 { 00:12:29.001 "name": "BaseBdev1", 00:12:29.001 "aliases": [ 00:12:29.001 "6ab43c57-1937-481c-8b52-577b45753224" 00:12:29.001 ], 00:12:29.001 "product_name": "Malloc disk", 00:12:29.001 "block_size": 512, 00:12:29.001 "num_blocks": 65536, 00:12:29.001 "uuid": "6ab43c57-1937-481c-8b52-577b45753224", 00:12:29.001 "assigned_rate_limits": { 00:12:29.001 
"rw_ios_per_sec": 0, 00:12:29.001 "rw_mbytes_per_sec": 0, 00:12:29.001 "r_mbytes_per_sec": 0, 00:12:29.001 "w_mbytes_per_sec": 0 00:12:29.001 }, 00:12:29.001 "claimed": true, 00:12:29.001 "claim_type": "exclusive_write", 00:12:29.001 "zoned": false, 00:12:29.001 "supported_io_types": { 00:12:29.001 "read": true, 00:12:29.001 "write": true, 00:12:29.001 "unmap": true, 00:12:29.001 "flush": true, 00:12:29.001 "reset": true, 00:12:29.001 "nvme_admin": false, 00:12:29.001 "nvme_io": false, 00:12:29.001 "nvme_io_md": false, 00:12:29.001 "write_zeroes": true, 00:12:29.001 "zcopy": true, 00:12:29.001 "get_zone_info": false, 00:12:29.001 "zone_management": false, 00:12:29.001 "zone_append": false, 00:12:29.001 "compare": false, 00:12:29.001 "compare_and_write": false, 00:12:29.001 "abort": true, 00:12:29.001 "seek_hole": false, 00:12:29.001 "seek_data": false, 00:12:29.001 "copy": true, 00:12:29.001 "nvme_iov_md": false 00:12:29.001 }, 00:12:29.001 "memory_domains": [ 00:12:29.001 { 00:12:29.001 "dma_device_id": "system", 00:12:29.001 "dma_device_type": 1 00:12:29.001 }, 00:12:29.001 { 00:12:29.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.001 "dma_device_type": 2 00:12:29.001 } 00:12:29.001 ], 00:12:29.001 "driver_specific": {} 00:12:29.001 } 00:12:29.001 ] 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.001 18:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.001 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.001 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.001 "name": "Existed_Raid", 00:12:29.001 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:29.001 "strip_size_kb": 0, 00:12:29.001 "state": "configuring", 00:12:29.001 "raid_level": "raid1", 00:12:29.001 "superblock": true, 00:12:29.001 "num_base_bdevs": 4, 00:12:29.001 "num_base_bdevs_discovered": 3, 00:12:29.001 "num_base_bdevs_operational": 4, 00:12:29.001 "base_bdevs_list": [ 00:12:29.001 { 00:12:29.001 "name": "BaseBdev1", 00:12:29.001 "uuid": "6ab43c57-1937-481c-8b52-577b45753224", 00:12:29.001 "is_configured": true, 00:12:29.001 "data_offset": 2048, 00:12:29.001 "data_size": 63488 
00:12:29.001 }, 00:12:29.001 { 00:12:29.001 "name": null, 00:12:29.001 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 00:12:29.001 "is_configured": false, 00:12:29.001 "data_offset": 0, 00:12:29.001 "data_size": 63488 00:12:29.001 }, 00:12:29.001 { 00:12:29.001 "name": "BaseBdev3", 00:12:29.001 "uuid": "9ad5077a-972a-446a-9b55-f764faf63150", 00:12:29.001 "is_configured": true, 00:12:29.001 "data_offset": 2048, 00:12:29.001 "data_size": 63488 00:12:29.001 }, 00:12:29.001 { 00:12:29.001 "name": "BaseBdev4", 00:12:29.001 "uuid": "3b61484b-4748-4783-bb79-835f07760284", 00:12:29.001 "is_configured": true, 00:12:29.001 "data_offset": 2048, 00:12:29.001 "data_size": 63488 00:12:29.001 } 00:12:29.001 ] 00:12:29.001 }' 00:12:29.001 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.001 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.260 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.260 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.260 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.261 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:29.261 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.520 
[2024-12-06 18:09:41.448084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.520 18:09:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.520 "name": "Existed_Raid", 00:12:29.520 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:29.520 "strip_size_kb": 0, 00:12:29.520 "state": "configuring", 00:12:29.520 "raid_level": "raid1", 00:12:29.520 "superblock": true, 00:12:29.520 "num_base_bdevs": 4, 00:12:29.520 "num_base_bdevs_discovered": 2, 00:12:29.520 "num_base_bdevs_operational": 4, 00:12:29.520 "base_bdevs_list": [ 00:12:29.520 { 00:12:29.520 "name": "BaseBdev1", 00:12:29.520 "uuid": "6ab43c57-1937-481c-8b52-577b45753224", 00:12:29.520 "is_configured": true, 00:12:29.520 "data_offset": 2048, 00:12:29.520 "data_size": 63488 00:12:29.520 }, 00:12:29.520 { 00:12:29.520 "name": null, 00:12:29.520 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 00:12:29.520 "is_configured": false, 00:12:29.520 "data_offset": 0, 00:12:29.520 "data_size": 63488 00:12:29.520 }, 00:12:29.520 { 00:12:29.520 "name": null, 00:12:29.520 "uuid": "9ad5077a-972a-446a-9b55-f764faf63150", 00:12:29.520 "is_configured": false, 00:12:29.520 "data_offset": 0, 00:12:29.520 "data_size": 63488 00:12:29.520 }, 00:12:29.520 { 00:12:29.520 "name": "BaseBdev4", 00:12:29.520 "uuid": "3b61484b-4748-4783-bb79-835f07760284", 00:12:29.520 "is_configured": true, 00:12:29.520 "data_offset": 2048, 00:12:29.520 "data_size": 63488 00:12:29.520 } 00:12:29.520 ] 00:12:29.520 }' 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.520 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.779 
18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.779 [2024-12-06 18:09:41.923270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.779 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.039 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.039 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.039 "name": "Existed_Raid", 00:12:30.039 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:30.039 "strip_size_kb": 0, 00:12:30.039 "state": "configuring", 00:12:30.039 "raid_level": "raid1", 00:12:30.039 "superblock": true, 00:12:30.039 "num_base_bdevs": 4, 00:12:30.039 "num_base_bdevs_discovered": 3, 00:12:30.039 "num_base_bdevs_operational": 4, 00:12:30.039 "base_bdevs_list": [ 00:12:30.039 { 00:12:30.039 "name": "BaseBdev1", 00:12:30.039 "uuid": "6ab43c57-1937-481c-8b52-577b45753224", 00:12:30.039 "is_configured": true, 00:12:30.039 "data_offset": 2048, 00:12:30.039 "data_size": 63488 00:12:30.039 }, 00:12:30.039 { 00:12:30.039 "name": null, 00:12:30.039 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 00:12:30.039 "is_configured": false, 00:12:30.039 "data_offset": 0, 00:12:30.039 "data_size": 63488 00:12:30.039 }, 00:12:30.039 { 00:12:30.039 "name": "BaseBdev3", 00:12:30.039 "uuid": "9ad5077a-972a-446a-9b55-f764faf63150", 00:12:30.039 "is_configured": true, 00:12:30.039 "data_offset": 2048, 00:12:30.039 "data_size": 63488 00:12:30.039 }, 00:12:30.039 { 00:12:30.039 "name": "BaseBdev4", 00:12:30.039 "uuid": 
"3b61484b-4748-4783-bb79-835f07760284", 00:12:30.039 "is_configured": true, 00:12:30.039 "data_offset": 2048, 00:12:30.039 "data_size": 63488 00:12:30.039 } 00:12:30.039 ] 00:12:30.039 }' 00:12:30.039 18:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.039 18:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.298 [2024-12-06 18:09:42.362604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.298 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.299 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.299 18:09:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.299 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.299 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.299 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.557 "name": "Existed_Raid", 00:12:30.557 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:30.557 "strip_size_kb": 0, 00:12:30.557 "state": "configuring", 00:12:30.557 "raid_level": "raid1", 00:12:30.557 "superblock": true, 00:12:30.557 "num_base_bdevs": 4, 00:12:30.557 "num_base_bdevs_discovered": 2, 00:12:30.557 "num_base_bdevs_operational": 4, 00:12:30.557 "base_bdevs_list": [ 00:12:30.557 { 00:12:30.557 "name": null, 00:12:30.557 
"uuid": "6ab43c57-1937-481c-8b52-577b45753224", 00:12:30.557 "is_configured": false, 00:12:30.557 "data_offset": 0, 00:12:30.557 "data_size": 63488 00:12:30.557 }, 00:12:30.557 { 00:12:30.557 "name": null, 00:12:30.557 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 00:12:30.557 "is_configured": false, 00:12:30.557 "data_offset": 0, 00:12:30.557 "data_size": 63488 00:12:30.557 }, 00:12:30.557 { 00:12:30.557 "name": "BaseBdev3", 00:12:30.557 "uuid": "9ad5077a-972a-446a-9b55-f764faf63150", 00:12:30.557 "is_configured": true, 00:12:30.557 "data_offset": 2048, 00:12:30.557 "data_size": 63488 00:12:30.557 }, 00:12:30.557 { 00:12:30.557 "name": "BaseBdev4", 00:12:30.557 "uuid": "3b61484b-4748-4783-bb79-835f07760284", 00:12:30.557 "is_configured": true, 00:12:30.557 "data_offset": 2048, 00:12:30.557 "data_size": 63488 00:12:30.557 } 00:12:30.557 ] 00:12:30.557 }' 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.557 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.816 [2024-12-06 18:09:42.971489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.816 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.075 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.075 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.075 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.075 18:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.075 18:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.075 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.075 "name": "Existed_Raid", 00:12:31.075 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:31.075 "strip_size_kb": 0, 00:12:31.075 "state": "configuring", 00:12:31.075 "raid_level": "raid1", 00:12:31.075 "superblock": true, 00:12:31.075 "num_base_bdevs": 4, 00:12:31.075 "num_base_bdevs_discovered": 3, 00:12:31.075 "num_base_bdevs_operational": 4, 00:12:31.075 "base_bdevs_list": [ 00:12:31.075 { 00:12:31.075 "name": null, 00:12:31.075 "uuid": "6ab43c57-1937-481c-8b52-577b45753224", 00:12:31.075 "is_configured": false, 00:12:31.075 "data_offset": 0, 00:12:31.075 "data_size": 63488 00:12:31.075 }, 00:12:31.075 { 00:12:31.075 "name": "BaseBdev2", 00:12:31.075 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 00:12:31.075 "is_configured": true, 00:12:31.075 "data_offset": 2048, 00:12:31.075 "data_size": 63488 00:12:31.075 }, 00:12:31.075 { 00:12:31.075 "name": "BaseBdev3", 00:12:31.075 "uuid": "9ad5077a-972a-446a-9b55-f764faf63150", 00:12:31.075 "is_configured": true, 00:12:31.075 "data_offset": 2048, 00:12:31.075 "data_size": 63488 00:12:31.075 }, 00:12:31.075 { 00:12:31.075 "name": "BaseBdev4", 00:12:31.075 "uuid": "3b61484b-4748-4783-bb79-835f07760284", 00:12:31.075 "is_configured": true, 00:12:31.075 "data_offset": 2048, 00:12:31.075 "data_size": 63488 00:12:31.075 } 00:12:31.075 ] 00:12:31.075 }' 00:12:31.075 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.075 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.334 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.334 18:09:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:31.334 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.334 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.334 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.334 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:31.335 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.335 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.335 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.335 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6ab43c57-1937-481c-8b52-577b45753224 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.594 [2024-12-06 18:09:43.577331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:31.594 [2024-12-06 18:09:43.577582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:31.594 [2024-12-06 18:09:43.577601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.594 [2024-12-06 18:09:43.577867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:31.594 
[2024-12-06 18:09:43.578040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:31.594 [2024-12-06 18:09:43.578051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:31.594 [2024-12-06 18:09:43.578212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.594 NewBaseBdev 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.594 18:09:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.594 [ 00:12:31.594 { 00:12:31.594 "name": "NewBaseBdev", 00:12:31.594 "aliases": [ 00:12:31.594 "6ab43c57-1937-481c-8b52-577b45753224" 00:12:31.594 ], 00:12:31.594 "product_name": "Malloc disk", 00:12:31.594 "block_size": 512, 00:12:31.594 "num_blocks": 65536, 00:12:31.594 "uuid": "6ab43c57-1937-481c-8b52-577b45753224", 00:12:31.594 "assigned_rate_limits": { 00:12:31.594 "rw_ios_per_sec": 0, 00:12:31.594 "rw_mbytes_per_sec": 0, 00:12:31.594 "r_mbytes_per_sec": 0, 00:12:31.594 "w_mbytes_per_sec": 0 00:12:31.594 }, 00:12:31.594 "claimed": true, 00:12:31.594 "claim_type": "exclusive_write", 00:12:31.594 "zoned": false, 00:12:31.594 "supported_io_types": { 00:12:31.594 "read": true, 00:12:31.594 "write": true, 00:12:31.594 "unmap": true, 00:12:31.594 "flush": true, 00:12:31.594 "reset": true, 00:12:31.594 "nvme_admin": false, 00:12:31.594 "nvme_io": false, 00:12:31.594 "nvme_io_md": false, 00:12:31.594 "write_zeroes": true, 00:12:31.594 "zcopy": true, 00:12:31.594 "get_zone_info": false, 00:12:31.594 "zone_management": false, 00:12:31.594 "zone_append": false, 00:12:31.594 "compare": false, 00:12:31.594 "compare_and_write": false, 00:12:31.594 "abort": true, 00:12:31.594 "seek_hole": false, 00:12:31.595 "seek_data": false, 00:12:31.595 "copy": true, 00:12:31.595 "nvme_iov_md": false 00:12:31.595 }, 00:12:31.595 "memory_domains": [ 00:12:31.595 { 00:12:31.595 "dma_device_id": "system", 00:12:31.595 "dma_device_type": 1 00:12:31.595 }, 00:12:31.595 { 00:12:31.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.595 "dma_device_type": 2 00:12:31.595 } 00:12:31.595 ], 00:12:31.595 "driver_specific": {} 00:12:31.595 } 00:12:31.595 ] 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.595 "name": "Existed_Raid", 00:12:31.595 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:31.595 "strip_size_kb": 0, 00:12:31.595 "state": "online", 00:12:31.595 "raid_level": 
"raid1", 00:12:31.595 "superblock": true, 00:12:31.595 "num_base_bdevs": 4, 00:12:31.595 "num_base_bdevs_discovered": 4, 00:12:31.595 "num_base_bdevs_operational": 4, 00:12:31.595 "base_bdevs_list": [ 00:12:31.595 { 00:12:31.595 "name": "NewBaseBdev", 00:12:31.595 "uuid": "6ab43c57-1937-481c-8b52-577b45753224", 00:12:31.595 "is_configured": true, 00:12:31.595 "data_offset": 2048, 00:12:31.595 "data_size": 63488 00:12:31.595 }, 00:12:31.595 { 00:12:31.595 "name": "BaseBdev2", 00:12:31.595 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 00:12:31.595 "is_configured": true, 00:12:31.595 "data_offset": 2048, 00:12:31.595 "data_size": 63488 00:12:31.595 }, 00:12:31.595 { 00:12:31.595 "name": "BaseBdev3", 00:12:31.595 "uuid": "9ad5077a-972a-446a-9b55-f764faf63150", 00:12:31.595 "is_configured": true, 00:12:31.595 "data_offset": 2048, 00:12:31.595 "data_size": 63488 00:12:31.595 }, 00:12:31.595 { 00:12:31.595 "name": "BaseBdev4", 00:12:31.595 "uuid": "3b61484b-4748-4783-bb79-835f07760284", 00:12:31.595 "is_configured": true, 00:12:31.595 "data_offset": 2048, 00:12:31.595 "data_size": 63488 00:12:31.595 } 00:12:31.595 ] 00:12:31.595 }' 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.595 18:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.166 [2024-12-06 18:09:44.112918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:32.166 "name": "Existed_Raid", 00:12:32.166 "aliases": [ 00:12:32.166 "64885272-000f-4770-8489-71cc963b3e07" 00:12:32.166 ], 00:12:32.166 "product_name": "Raid Volume", 00:12:32.166 "block_size": 512, 00:12:32.166 "num_blocks": 63488, 00:12:32.166 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:32.166 "assigned_rate_limits": { 00:12:32.166 "rw_ios_per_sec": 0, 00:12:32.166 "rw_mbytes_per_sec": 0, 00:12:32.166 "r_mbytes_per_sec": 0, 00:12:32.166 "w_mbytes_per_sec": 0 00:12:32.166 }, 00:12:32.166 "claimed": false, 00:12:32.166 "zoned": false, 00:12:32.166 "supported_io_types": { 00:12:32.166 "read": true, 00:12:32.166 "write": true, 00:12:32.166 "unmap": false, 00:12:32.166 "flush": false, 00:12:32.166 "reset": true, 00:12:32.166 "nvme_admin": false, 00:12:32.166 "nvme_io": false, 00:12:32.166 "nvme_io_md": false, 00:12:32.166 "write_zeroes": true, 00:12:32.166 "zcopy": false, 00:12:32.166 "get_zone_info": false, 00:12:32.166 "zone_management": false, 00:12:32.166 "zone_append": false, 00:12:32.166 "compare": false, 00:12:32.166 "compare_and_write": false, 00:12:32.166 "abort": false, 00:12:32.166 "seek_hole": false, 
00:12:32.166 "seek_data": false, 00:12:32.166 "copy": false, 00:12:32.166 "nvme_iov_md": false 00:12:32.166 }, 00:12:32.166 "memory_domains": [ 00:12:32.166 { 00:12:32.166 "dma_device_id": "system", 00:12:32.166 "dma_device_type": 1 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.166 "dma_device_type": 2 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "dma_device_id": "system", 00:12:32.166 "dma_device_type": 1 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.166 "dma_device_type": 2 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "dma_device_id": "system", 00:12:32.166 "dma_device_type": 1 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.166 "dma_device_type": 2 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "dma_device_id": "system", 00:12:32.166 "dma_device_type": 1 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.166 "dma_device_type": 2 00:12:32.166 } 00:12:32.166 ], 00:12:32.166 "driver_specific": { 00:12:32.166 "raid": { 00:12:32.166 "uuid": "64885272-000f-4770-8489-71cc963b3e07", 00:12:32.166 "strip_size_kb": 0, 00:12:32.166 "state": "online", 00:12:32.166 "raid_level": "raid1", 00:12:32.166 "superblock": true, 00:12:32.166 "num_base_bdevs": 4, 00:12:32.166 "num_base_bdevs_discovered": 4, 00:12:32.166 "num_base_bdevs_operational": 4, 00:12:32.166 "base_bdevs_list": [ 00:12:32.166 { 00:12:32.166 "name": "NewBaseBdev", 00:12:32.166 "uuid": "6ab43c57-1937-481c-8b52-577b45753224", 00:12:32.166 "is_configured": true, 00:12:32.166 "data_offset": 2048, 00:12:32.166 "data_size": 63488 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "name": "BaseBdev2", 00:12:32.166 "uuid": "45b4858e-a794-4503-8703-7fa92e47d0a7", 00:12:32.166 "is_configured": true, 00:12:32.166 "data_offset": 2048, 00:12:32.166 "data_size": 63488 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "name": "BaseBdev3", 00:12:32.166 "uuid": 
"9ad5077a-972a-446a-9b55-f764faf63150", 00:12:32.166 "is_configured": true, 00:12:32.166 "data_offset": 2048, 00:12:32.166 "data_size": 63488 00:12:32.166 }, 00:12:32.166 { 00:12:32.166 "name": "BaseBdev4", 00:12:32.166 "uuid": "3b61484b-4748-4783-bb79-835f07760284", 00:12:32.166 "is_configured": true, 00:12:32.166 "data_offset": 2048, 00:12:32.166 "data_size": 63488 00:12:32.166 } 00:12:32.166 ] 00:12:32.166 } 00:12:32.166 } 00:12:32.166 }' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:32.166 BaseBdev2 00:12:32.166 BaseBdev3 00:12:32.166 BaseBdev4' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.166 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.434 
18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.434 [2024-12-06 18:09:44.435983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.434 [2024-12-06 18:09:44.436013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.434 [2024-12-06 18:09:44.436113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.434 [2024-12-06 18:09:44.436442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.434 [2024-12-06 18:09:44.436464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:32.434 18:09:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74348 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74348 ']' 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74348 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74348 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.434 killing process with pid 74348 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74348' 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74348 00:12:32.434 [2024-12-06 18:09:44.473450] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.434 18:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74348 00:12:33.003 [2024-12-06 18:09:44.895011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.939 18:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:33.939 00:12:33.939 real 0m12.025s 00:12:33.939 user 0m19.083s 00:12:33.939 sys 0m2.123s 00:12:33.939 18:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.939 ************************************ 00:12:33.939 
END TEST raid_state_function_test_sb 00:12:33.939 ************************************ 00:12:33.939 18:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.199 18:09:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:34.199 18:09:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:34.199 18:09:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.199 18:09:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.199 ************************************ 00:12:34.199 START TEST raid_superblock_test 00:12:34.199 ************************************ 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75018 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75018 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75018 ']' 00:12:34.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.199 18:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.199 [2024-12-06 18:09:46.235872] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:12:34.199 [2024-12-06 18:09:46.236084] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75018 ] 00:12:34.457 [2024-12-06 18:09:46.412026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.457 [2024-12-06 18:09:46.536870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.715 [2024-12-06 18:09:46.766666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.715 [2024-12-06 18:09:46.766760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:34.973 
18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.973 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 malloc1 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 [2024-12-06 18:09:47.152479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:35.232 [2024-12-06 18:09:47.152602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.232 [2024-12-06 18:09:47.152662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:35.232 [2024-12-06 18:09:47.152692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.232 [2024-12-06 18:09:47.154876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.232 [2024-12-06 18:09:47.154954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:35.232 pt1 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 malloc2 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 [2024-12-06 18:09:47.212387] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:35.232 [2024-12-06 18:09:47.212507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.232 [2024-12-06 18:09:47.212556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:35.232 [2024-12-06 18:09:47.212567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.232 [2024-12-06 18:09:47.214836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.232 [2024-12-06 18:09:47.214875] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:35.232 
pt2 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 malloc3 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 [2024-12-06 18:09:47.289629] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:35.232 [2024-12-06 18:09:47.289741] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.232 [2024-12-06 18:09:47.289788] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:35.232 [2024-12-06 18:09:47.289835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.232 [2024-12-06 18:09:47.292347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.232 [2024-12-06 18:09:47.292435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:35.232 pt3 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 malloc4 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 [2024-12-06 18:09:47.353116] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:35.232 [2024-12-06 18:09:47.353231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.232 [2024-12-06 18:09:47.353276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:35.232 [2024-12-06 18:09:47.353348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.232 [2024-12-06 18:09:47.355710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.232 [2024-12-06 18:09:47.355788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:35.232 pt4 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 [2024-12-06 18:09:47.365126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:35.232 [2024-12-06 18:09:47.367118] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:35.232 [2024-12-06 18:09:47.367189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:35.232 [2024-12-06 18:09:47.367259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:35.232 [2024-12-06 18:09:47.367507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:35.232 [2024-12-06 18:09:47.367533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:35.232 [2024-12-06 18:09:47.367838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:35.232 [2024-12-06 18:09:47.368045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:35.232 [2024-12-06 18:09:47.368078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:35.232 [2024-12-06 18:09:47.368278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.232 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.233 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.233 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.233 
18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.233 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.233 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.233 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.233 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.233 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.233 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.491 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.491 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.491 "name": "raid_bdev1", 00:12:35.491 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:35.491 "strip_size_kb": 0, 00:12:35.491 "state": "online", 00:12:35.491 "raid_level": "raid1", 00:12:35.491 "superblock": true, 00:12:35.491 "num_base_bdevs": 4, 00:12:35.491 "num_base_bdevs_discovered": 4, 00:12:35.491 "num_base_bdevs_operational": 4, 00:12:35.491 "base_bdevs_list": [ 00:12:35.491 { 00:12:35.491 "name": "pt1", 00:12:35.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:35.491 "is_configured": true, 00:12:35.491 "data_offset": 2048, 00:12:35.491 "data_size": 63488 00:12:35.491 }, 00:12:35.491 { 00:12:35.491 "name": "pt2", 00:12:35.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.491 "is_configured": true, 00:12:35.491 "data_offset": 2048, 00:12:35.491 "data_size": 63488 00:12:35.491 }, 00:12:35.491 { 00:12:35.491 "name": "pt3", 00:12:35.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.491 "is_configured": true, 00:12:35.491 "data_offset": 2048, 00:12:35.491 "data_size": 63488 
00:12:35.491 }, 00:12:35.491 { 00:12:35.491 "name": "pt4", 00:12:35.491 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:35.491 "is_configured": true, 00:12:35.491 "data_offset": 2048, 00:12:35.491 "data_size": 63488 00:12:35.491 } 00:12:35.491 ] 00:12:35.491 }' 00:12:35.491 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.491 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.754 [2024-12-06 18:09:47.848670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:35.754 "name": "raid_bdev1", 00:12:35.754 "aliases": [ 00:12:35.754 "d73d6be6-89e5-4cff-ade3-8a765426e60e" 00:12:35.754 ], 
00:12:35.754 "product_name": "Raid Volume", 00:12:35.754 "block_size": 512, 00:12:35.754 "num_blocks": 63488, 00:12:35.754 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:35.754 "assigned_rate_limits": { 00:12:35.754 "rw_ios_per_sec": 0, 00:12:35.754 "rw_mbytes_per_sec": 0, 00:12:35.754 "r_mbytes_per_sec": 0, 00:12:35.754 "w_mbytes_per_sec": 0 00:12:35.754 }, 00:12:35.754 "claimed": false, 00:12:35.754 "zoned": false, 00:12:35.754 "supported_io_types": { 00:12:35.754 "read": true, 00:12:35.754 "write": true, 00:12:35.754 "unmap": false, 00:12:35.754 "flush": false, 00:12:35.754 "reset": true, 00:12:35.754 "nvme_admin": false, 00:12:35.754 "nvme_io": false, 00:12:35.754 "nvme_io_md": false, 00:12:35.754 "write_zeroes": true, 00:12:35.754 "zcopy": false, 00:12:35.754 "get_zone_info": false, 00:12:35.754 "zone_management": false, 00:12:35.754 "zone_append": false, 00:12:35.754 "compare": false, 00:12:35.754 "compare_and_write": false, 00:12:35.754 "abort": false, 00:12:35.754 "seek_hole": false, 00:12:35.754 "seek_data": false, 00:12:35.754 "copy": false, 00:12:35.754 "nvme_iov_md": false 00:12:35.754 }, 00:12:35.754 "memory_domains": [ 00:12:35.754 { 00:12:35.754 "dma_device_id": "system", 00:12:35.754 "dma_device_type": 1 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.754 "dma_device_type": 2 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "dma_device_id": "system", 00:12:35.754 "dma_device_type": 1 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.754 "dma_device_type": 2 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "dma_device_id": "system", 00:12:35.754 "dma_device_type": 1 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.754 "dma_device_type": 2 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "dma_device_id": "system", 00:12:35.754 "dma_device_type": 1 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:35.754 "dma_device_type": 2 00:12:35.754 } 00:12:35.754 ], 00:12:35.754 "driver_specific": { 00:12:35.754 "raid": { 00:12:35.754 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:35.754 "strip_size_kb": 0, 00:12:35.754 "state": "online", 00:12:35.754 "raid_level": "raid1", 00:12:35.754 "superblock": true, 00:12:35.754 "num_base_bdevs": 4, 00:12:35.754 "num_base_bdevs_discovered": 4, 00:12:35.754 "num_base_bdevs_operational": 4, 00:12:35.754 "base_bdevs_list": [ 00:12:35.754 { 00:12:35.754 "name": "pt1", 00:12:35.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:35.754 "is_configured": true, 00:12:35.754 "data_offset": 2048, 00:12:35.754 "data_size": 63488 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "name": "pt2", 00:12:35.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.754 "is_configured": true, 00:12:35.754 "data_offset": 2048, 00:12:35.754 "data_size": 63488 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "name": "pt3", 00:12:35.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.754 "is_configured": true, 00:12:35.754 "data_offset": 2048, 00:12:35.754 "data_size": 63488 00:12:35.754 }, 00:12:35.754 { 00:12:35.754 "name": "pt4", 00:12:35.754 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:35.754 "is_configured": true, 00:12:35.754 "data_offset": 2048, 00:12:35.754 "data_size": 63488 00:12:35.754 } 00:12:35.754 ] 00:12:35.754 } 00:12:35.754 } 00:12:35.754 }' 00:12:35.754 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.012 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:36.012 pt2 00:12:36.012 pt3 00:12:36.012 pt4' 00:12:36.012 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.012 18:09:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:36.012 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.012 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.012 18:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:36.012 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.012 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.012 18:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.012 18:09:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:36.012 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.012 [2024-12-06 18:09:48.172088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.271 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.271 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d73d6be6-89e5-4cff-ade3-8a765426e60e 00:12:36.271 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d73d6be6-89e5-4cff-ade3-8a765426e60e ']' 00:12:36.271 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:36.271 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.271 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.271 [2024-12-06 18:09:48.203689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.271 [2024-12-06 18:09:48.203723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.272 [2024-12-06 18:09:48.203821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.272 [2024-12-06 18:09:48.203921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.272 [2024-12-06 18:09:48.203954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.272 18:09:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.272 [2024-12-06 18:09:48.367473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:36.272 [2024-12-06 18:09:48.369534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:36.272 [2024-12-06 18:09:48.369594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:36.272 [2024-12-06 18:09:48.369635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:36.272 [2024-12-06 18:09:48.369690] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:36.272 [2024-12-06 18:09:48.369743] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:36.272 [2024-12-06 18:09:48.369765] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:36.272 [2024-12-06 18:09:48.369785] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:36.272 [2024-12-06 18:09:48.369800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.272 [2024-12-06 18:09:48.369811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:36.272 request: 00:12:36.272 { 00:12:36.272 "name": "raid_bdev1", 00:12:36.272 "raid_level": "raid1", 00:12:36.272 "base_bdevs": [ 00:12:36.272 "malloc1", 00:12:36.272 "malloc2", 00:12:36.272 "malloc3", 00:12:36.272 "malloc4" 00:12:36.272 ], 00:12:36.272 "superblock": false, 00:12:36.272 "method": "bdev_raid_create", 00:12:36.272 "req_id": 1 00:12:36.272 } 00:12:36.272 Got JSON-RPC error response 00:12:36.272 response: 00:12:36.272 { 00:12:36.272 "code": -17, 00:12:36.272 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:36.272 } 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:36.272 
18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.272 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.272 [2024-12-06 18:09:48.431322] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:36.272 [2024-12-06 18:09:48.431386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.272 [2024-12-06 18:09:48.431406] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:36.272 [2024-12-06 18:09:48.431418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.272 [2024-12-06 18:09:48.434066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.272 [2024-12-06 18:09:48.434124] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:36.272 [2024-12-06 18:09:48.434221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:36.272 [2024-12-06 18:09:48.434287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:36.531 pt1 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.531 18:09:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.531 "name": "raid_bdev1", 00:12:36.531 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:36.531 "strip_size_kb": 0, 00:12:36.531 "state": "configuring", 00:12:36.531 "raid_level": "raid1", 00:12:36.531 "superblock": true, 00:12:36.531 "num_base_bdevs": 4, 00:12:36.531 "num_base_bdevs_discovered": 1, 00:12:36.531 "num_base_bdevs_operational": 4, 00:12:36.531 "base_bdevs_list": [ 00:12:36.531 { 00:12:36.531 "name": "pt1", 00:12:36.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:36.531 "is_configured": true, 00:12:36.531 "data_offset": 2048, 00:12:36.531 "data_size": 63488 00:12:36.531 }, 00:12:36.531 { 00:12:36.531 "name": null, 00:12:36.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.531 "is_configured": false, 00:12:36.531 "data_offset": 2048, 00:12:36.531 "data_size": 63488 00:12:36.531 }, 00:12:36.531 { 00:12:36.531 "name": null, 00:12:36.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:36.531 
"is_configured": false, 00:12:36.531 "data_offset": 2048, 00:12:36.531 "data_size": 63488 00:12:36.531 }, 00:12:36.531 { 00:12:36.531 "name": null, 00:12:36.531 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:36.531 "is_configured": false, 00:12:36.531 "data_offset": 2048, 00:12:36.531 "data_size": 63488 00:12:36.531 } 00:12:36.531 ] 00:12:36.531 }' 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.531 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.789 [2024-12-06 18:09:48.906545] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:36.789 [2024-12-06 18:09:48.906643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.789 [2024-12-06 18:09:48.906669] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:36.789 [2024-12-06 18:09:48.906683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.789 [2024-12-06 18:09:48.907226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.789 [2024-12-06 18:09:48.907258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:36.789 [2024-12-06 18:09:48.907373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:36.789 [2024-12-06 18:09:48.907406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:36.789 pt2 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.789 [2024-12-06 18:09:48.918552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.789 18:09:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.789 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.047 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.047 "name": "raid_bdev1", 00:12:37.047 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:37.047 "strip_size_kb": 0, 00:12:37.047 "state": "configuring", 00:12:37.047 "raid_level": "raid1", 00:12:37.047 "superblock": true, 00:12:37.047 "num_base_bdevs": 4, 00:12:37.047 "num_base_bdevs_discovered": 1, 00:12:37.047 "num_base_bdevs_operational": 4, 00:12:37.047 "base_bdevs_list": [ 00:12:37.047 { 00:12:37.047 "name": "pt1", 00:12:37.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.047 "is_configured": true, 00:12:37.047 "data_offset": 2048, 00:12:37.047 "data_size": 63488 00:12:37.047 }, 00:12:37.047 { 00:12:37.047 "name": null, 00:12:37.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.047 "is_configured": false, 00:12:37.047 "data_offset": 0, 00:12:37.047 "data_size": 63488 00:12:37.047 }, 00:12:37.047 { 00:12:37.047 "name": null, 00:12:37.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.047 "is_configured": false, 00:12:37.047 "data_offset": 2048, 00:12:37.047 "data_size": 63488 00:12:37.047 }, 00:12:37.047 { 00:12:37.047 "name": null, 00:12:37.047 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.047 "is_configured": false, 00:12:37.047 "data_offset": 2048, 00:12:37.047 "data_size": 63488 00:12:37.047 } 00:12:37.047 ] 00:12:37.047 }' 00:12:37.047 18:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.047 18:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.363 [2024-12-06 18:09:49.389722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:37.363 [2024-12-06 18:09:49.389798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.363 [2024-12-06 18:09:49.389821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:37.363 [2024-12-06 18:09:49.389831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.363 [2024-12-06 18:09:49.390364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.363 [2024-12-06 18:09:49.390385] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:37.363 [2024-12-06 18:09:49.390482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:37.363 [2024-12-06 18:09:49.390506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:37.363 pt2 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:37.363 18:09:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.363 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.364 [2024-12-06 18:09:49.401677] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:37.364 [2024-12-06 18:09:49.401734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.364 [2024-12-06 18:09:49.401756] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:37.364 [2024-12-06 18:09:49.401766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.364 [2024-12-06 18:09:49.402219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.364 [2024-12-06 18:09:49.402242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:37.364 [2024-12-06 18:09:49.402324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:37.364 [2024-12-06 18:09:49.402349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:37.364 pt3 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.364 [2024-12-06 18:09:49.413613] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:37.364 [2024-12-06 
18:09:49.413655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.364 [2024-12-06 18:09:49.413689] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:37.364 [2024-12-06 18:09:49.413698] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.364 [2024-12-06 18:09:49.414117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.364 [2024-12-06 18:09:49.414134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:37.364 [2024-12-06 18:09:49.414221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:37.364 [2024-12-06 18:09:49.414250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:37.364 [2024-12-06 18:09:49.414406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:37.364 [2024-12-06 18:09:49.414415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:37.364 [2024-12-06 18:09:49.414676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:37.364 [2024-12-06 18:09:49.414850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:37.364 [2024-12-06 18:09:49.414886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:37.364 [2024-12-06 18:09:49.415049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.364 pt4 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.364 "name": "raid_bdev1", 00:12:37.364 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:37.364 "strip_size_kb": 0, 00:12:37.364 "state": "online", 00:12:37.364 "raid_level": "raid1", 00:12:37.364 "superblock": true, 00:12:37.364 "num_base_bdevs": 4, 00:12:37.364 
"num_base_bdevs_discovered": 4, 00:12:37.364 "num_base_bdevs_operational": 4, 00:12:37.364 "base_bdevs_list": [ 00:12:37.364 { 00:12:37.364 "name": "pt1", 00:12:37.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.364 "is_configured": true, 00:12:37.364 "data_offset": 2048, 00:12:37.364 "data_size": 63488 00:12:37.364 }, 00:12:37.364 { 00:12:37.364 "name": "pt2", 00:12:37.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.364 "is_configured": true, 00:12:37.364 "data_offset": 2048, 00:12:37.364 "data_size": 63488 00:12:37.364 }, 00:12:37.364 { 00:12:37.364 "name": "pt3", 00:12:37.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.364 "is_configured": true, 00:12:37.364 "data_offset": 2048, 00:12:37.364 "data_size": 63488 00:12:37.364 }, 00:12:37.364 { 00:12:37.364 "name": "pt4", 00:12:37.364 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.364 "is_configured": true, 00:12:37.364 "data_offset": 2048, 00:12:37.364 "data_size": 63488 00:12:37.364 } 00:12:37.364 ] 00:12:37.364 }' 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.364 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.930 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:37.930 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:37.930 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:37.930 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:37.931 18:09:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.931 [2024-12-06 18:09:49.885288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:37.931 "name": "raid_bdev1", 00:12:37.931 "aliases": [ 00:12:37.931 "d73d6be6-89e5-4cff-ade3-8a765426e60e" 00:12:37.931 ], 00:12:37.931 "product_name": "Raid Volume", 00:12:37.931 "block_size": 512, 00:12:37.931 "num_blocks": 63488, 00:12:37.931 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:37.931 "assigned_rate_limits": { 00:12:37.931 "rw_ios_per_sec": 0, 00:12:37.931 "rw_mbytes_per_sec": 0, 00:12:37.931 "r_mbytes_per_sec": 0, 00:12:37.931 "w_mbytes_per_sec": 0 00:12:37.931 }, 00:12:37.931 "claimed": false, 00:12:37.931 "zoned": false, 00:12:37.931 "supported_io_types": { 00:12:37.931 "read": true, 00:12:37.931 "write": true, 00:12:37.931 "unmap": false, 00:12:37.931 "flush": false, 00:12:37.931 "reset": true, 00:12:37.931 "nvme_admin": false, 00:12:37.931 "nvme_io": false, 00:12:37.931 "nvme_io_md": false, 00:12:37.931 "write_zeroes": true, 00:12:37.931 "zcopy": false, 00:12:37.931 "get_zone_info": false, 00:12:37.931 "zone_management": false, 00:12:37.931 "zone_append": false, 00:12:37.931 "compare": false, 00:12:37.931 "compare_and_write": false, 00:12:37.931 "abort": false, 00:12:37.931 "seek_hole": false, 00:12:37.931 "seek_data": false, 00:12:37.931 "copy": false, 00:12:37.931 "nvme_iov_md": false 00:12:37.931 }, 00:12:37.931 "memory_domains": [ 00:12:37.931 { 00:12:37.931 "dma_device_id": "system", 00:12:37.931 
"dma_device_type": 1 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.931 "dma_device_type": 2 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "dma_device_id": "system", 00:12:37.931 "dma_device_type": 1 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.931 "dma_device_type": 2 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "dma_device_id": "system", 00:12:37.931 "dma_device_type": 1 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.931 "dma_device_type": 2 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "dma_device_id": "system", 00:12:37.931 "dma_device_type": 1 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.931 "dma_device_type": 2 00:12:37.931 } 00:12:37.931 ], 00:12:37.931 "driver_specific": { 00:12:37.931 "raid": { 00:12:37.931 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:37.931 "strip_size_kb": 0, 00:12:37.931 "state": "online", 00:12:37.931 "raid_level": "raid1", 00:12:37.931 "superblock": true, 00:12:37.931 "num_base_bdevs": 4, 00:12:37.931 "num_base_bdevs_discovered": 4, 00:12:37.931 "num_base_bdevs_operational": 4, 00:12:37.931 "base_bdevs_list": [ 00:12:37.931 { 00:12:37.931 "name": "pt1", 00:12:37.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.931 "is_configured": true, 00:12:37.931 "data_offset": 2048, 00:12:37.931 "data_size": 63488 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "name": "pt2", 00:12:37.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.931 "is_configured": true, 00:12:37.931 "data_offset": 2048, 00:12:37.931 "data_size": 63488 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "name": "pt3", 00:12:37.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.931 "is_configured": true, 00:12:37.931 "data_offset": 2048, 00:12:37.931 "data_size": 63488 00:12:37.931 }, 00:12:37.931 { 00:12:37.931 "name": "pt4", 00:12:37.931 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:37.931 "is_configured": true, 00:12:37.931 "data_offset": 2048, 00:12:37.931 "data_size": 63488 00:12:37.931 } 00:12:37.931 ] 00:12:37.931 } 00:12:37.931 } 00:12:37.931 }' 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:37.931 pt2 00:12:37.931 pt3 00:12:37.931 pt4' 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.931 18:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:37.931 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:38.191 [2024-12-06 18:09:50.184702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d73d6be6-89e5-4cff-ade3-8a765426e60e '!=' d73d6be6-89e5-4cff-ade3-8a765426e60e ']' 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.191 [2024-12-06 18:09:50.232335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:38.191 18:09:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.191 "name": "raid_bdev1", 00:12:38.191 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:38.191 "strip_size_kb": 0, 00:12:38.191 "state": "online", 
00:12:38.191 "raid_level": "raid1", 00:12:38.191 "superblock": true, 00:12:38.191 "num_base_bdevs": 4, 00:12:38.191 "num_base_bdevs_discovered": 3, 00:12:38.191 "num_base_bdevs_operational": 3, 00:12:38.191 "base_bdevs_list": [ 00:12:38.191 { 00:12:38.191 "name": null, 00:12:38.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.191 "is_configured": false, 00:12:38.191 "data_offset": 0, 00:12:38.191 "data_size": 63488 00:12:38.191 }, 00:12:38.191 { 00:12:38.191 "name": "pt2", 00:12:38.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.191 "is_configured": true, 00:12:38.191 "data_offset": 2048, 00:12:38.191 "data_size": 63488 00:12:38.191 }, 00:12:38.191 { 00:12:38.191 "name": "pt3", 00:12:38.191 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.191 "is_configured": true, 00:12:38.191 "data_offset": 2048, 00:12:38.191 "data_size": 63488 00:12:38.191 }, 00:12:38.191 { 00:12:38.191 "name": "pt4", 00:12:38.191 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:38.191 "is_configured": true, 00:12:38.191 "data_offset": 2048, 00:12:38.191 "data_size": 63488 00:12:38.191 } 00:12:38.191 ] 00:12:38.191 }' 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.191 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.758 [2024-12-06 18:09:50.631609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.758 [2024-12-06 18:09:50.631649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.758 [2024-12-06 18:09:50.631749] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:38.758 [2024-12-06 18:09:50.631841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.758 [2024-12-06 18:09:50.631857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:38.758 
18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.758 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.758 [2024-12-06 18:09:50.711472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:38.758 [2024-12-06 18:09:50.711527] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.758 [2024-12-06 18:09:50.711548] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:38.758 [2024-12-06 18:09:50.711557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.758 [2024-12-06 18:09:50.713921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.758 [2024-12-06 18:09:50.713959] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:38.758 [2024-12-06 18:09:50.714046] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:38.759 [2024-12-06 18:09:50.714127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:38.759 pt2 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.759 "name": "raid_bdev1", 00:12:38.759 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:38.759 "strip_size_kb": 0, 00:12:38.759 "state": "configuring", 00:12:38.759 "raid_level": "raid1", 00:12:38.759 "superblock": true, 00:12:38.759 "num_base_bdevs": 4, 00:12:38.759 "num_base_bdevs_discovered": 1, 00:12:38.759 "num_base_bdevs_operational": 3, 00:12:38.759 "base_bdevs_list": [ 00:12:38.759 { 00:12:38.759 "name": null, 00:12:38.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.759 "is_configured": false, 00:12:38.759 "data_offset": 2048, 00:12:38.759 "data_size": 63488 00:12:38.759 }, 00:12:38.759 { 00:12:38.759 "name": "pt2", 00:12:38.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.759 "is_configured": true, 00:12:38.759 "data_offset": 2048, 00:12:38.759 "data_size": 63488 00:12:38.759 }, 00:12:38.759 { 00:12:38.759 "name": null, 00:12:38.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:38.759 "is_configured": false, 00:12:38.759 "data_offset": 2048, 00:12:38.759 "data_size": 63488 00:12:38.759 }, 00:12:38.759 { 00:12:38.759 "name": null, 00:12:38.759 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:38.759 "is_configured": false, 00:12:38.759 "data_offset": 2048, 00:12:38.759 "data_size": 63488 00:12:38.759 } 00:12:38.759 ] 00:12:38.759 }' 
00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.759 18:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 [2024-12-06 18:09:51.222837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:39.326 [2024-12-06 18:09:51.222909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.326 [2024-12-06 18:09:51.222932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:39.326 [2024-12-06 18:09:51.222943] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.326 [2024-12-06 18:09:51.223469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.326 [2024-12-06 18:09:51.223491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:39.326 [2024-12-06 18:09:51.223590] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:39.326 [2024-12-06 18:09:51.223615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:39.326 pt3 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.326 "name": "raid_bdev1", 00:12:39.326 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:39.326 "strip_size_kb": 0, 00:12:39.326 "state": "configuring", 00:12:39.326 "raid_level": "raid1", 00:12:39.326 "superblock": true, 00:12:39.326 "num_base_bdevs": 4, 00:12:39.326 "num_base_bdevs_discovered": 2, 00:12:39.326 "num_base_bdevs_operational": 3, 00:12:39.326 
"base_bdevs_list": [ 00:12:39.326 { 00:12:39.326 "name": null, 00:12:39.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.326 "is_configured": false, 00:12:39.326 "data_offset": 2048, 00:12:39.326 "data_size": 63488 00:12:39.326 }, 00:12:39.326 { 00:12:39.326 "name": "pt2", 00:12:39.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:39.326 "is_configured": true, 00:12:39.326 "data_offset": 2048, 00:12:39.326 "data_size": 63488 00:12:39.326 }, 00:12:39.326 { 00:12:39.326 "name": "pt3", 00:12:39.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:39.326 "is_configured": true, 00:12:39.326 "data_offset": 2048, 00:12:39.326 "data_size": 63488 00:12:39.326 }, 00:12:39.326 { 00:12:39.326 "name": null, 00:12:39.326 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:39.326 "is_configured": false, 00:12:39.326 "data_offset": 2048, 00:12:39.326 "data_size": 63488 00:12:39.326 } 00:12:39.326 ] 00:12:39.326 }' 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.326 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.585 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:39.585 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:39.585 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:39.585 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:39.585 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.585 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.585 [2024-12-06 18:09:51.722000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:39.585 [2024-12-06 18:09:51.722093] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.585 [2024-12-06 18:09:51.722124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:39.585 [2024-12-06 18:09:51.722135] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.585 [2024-12-06 18:09:51.722651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.585 [2024-12-06 18:09:51.722671] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:39.585 [2024-12-06 18:09:51.722773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:39.585 [2024-12-06 18:09:51.722800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:39.585 [2024-12-06 18:09:51.722960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:39.585 [2024-12-06 18:09:51.722970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.585 [2024-12-06 18:09:51.723258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:39.586 [2024-12-06 18:09:51.723442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:39.586 [2024-12-06 18:09:51.723457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:39.586 [2024-12-06 18:09:51.723635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.586 pt4 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.586 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.846 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.846 "name": "raid_bdev1", 00:12:39.846 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:39.846 "strip_size_kb": 0, 00:12:39.846 "state": "online", 00:12:39.846 "raid_level": "raid1", 00:12:39.846 "superblock": true, 00:12:39.846 "num_base_bdevs": 4, 00:12:39.846 "num_base_bdevs_discovered": 3, 00:12:39.846 "num_base_bdevs_operational": 3, 00:12:39.846 "base_bdevs_list": [ 00:12:39.846 { 00:12:39.846 "name": null, 00:12:39.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.846 "is_configured": false, 00:12:39.846 
"data_offset": 2048, 00:12:39.846 "data_size": 63488 00:12:39.846 }, 00:12:39.846 { 00:12:39.846 "name": "pt2", 00:12:39.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:39.846 "is_configured": true, 00:12:39.846 "data_offset": 2048, 00:12:39.846 "data_size": 63488 00:12:39.846 }, 00:12:39.846 { 00:12:39.846 "name": "pt3", 00:12:39.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:39.846 "is_configured": true, 00:12:39.846 "data_offset": 2048, 00:12:39.846 "data_size": 63488 00:12:39.846 }, 00:12:39.846 { 00:12:39.846 "name": "pt4", 00:12:39.846 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:39.846 "is_configured": true, 00:12:39.846 "data_offset": 2048, 00:12:39.846 "data_size": 63488 00:12:39.846 } 00:12:39.846 ] 00:12:39.846 }' 00:12:39.846 18:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.846 18:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.105 [2024-12-06 18:09:52.181184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.105 [2024-12-06 18:09:52.181222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.105 [2024-12-06 18:09:52.181317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.105 [2024-12-06 18:09:52.181406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.105 [2024-12-06 18:09:52.181427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:40.105 18:09:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.105 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.105 [2024-12-06 18:09:52.241084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:40.105 [2024-12-06 18:09:52.241160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:40.105 [2024-12-06 18:09:52.241182] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:40.105 [2024-12-06 18:09:52.241198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.105 [2024-12-06 18:09:52.243679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.105 [2024-12-06 18:09:52.243722] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:40.105 [2024-12-06 18:09:52.243822] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:40.106 [2024-12-06 18:09:52.243893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:40.106 [2024-12-06 18:09:52.244051] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:40.106 [2024-12-06 18:09:52.244087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.106 [2024-12-06 18:09:52.244107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:40.106 [2024-12-06 18:09:52.244184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:40.106 [2024-12-06 18:09:52.244318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:40.106 pt1 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.106 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.364 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.364 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.364 "name": "raid_bdev1", 00:12:40.364 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:40.364 "strip_size_kb": 0, 00:12:40.364 "state": "configuring", 00:12:40.364 "raid_level": "raid1", 00:12:40.364 "superblock": true, 00:12:40.364 "num_base_bdevs": 4, 00:12:40.364 "num_base_bdevs_discovered": 2, 00:12:40.364 "num_base_bdevs_operational": 3, 00:12:40.364 "base_bdevs_list": [ 00:12:40.364 { 00:12:40.364 "name": null, 00:12:40.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.364 "is_configured": false, 00:12:40.364 "data_offset": 2048, 00:12:40.364 
"data_size": 63488 00:12:40.364 }, 00:12:40.364 { 00:12:40.364 "name": "pt2", 00:12:40.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.364 "is_configured": true, 00:12:40.364 "data_offset": 2048, 00:12:40.364 "data_size": 63488 00:12:40.364 }, 00:12:40.364 { 00:12:40.364 "name": "pt3", 00:12:40.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.364 "is_configured": true, 00:12:40.364 "data_offset": 2048, 00:12:40.364 "data_size": 63488 00:12:40.364 }, 00:12:40.364 { 00:12:40.364 "name": null, 00:12:40.364 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.364 "is_configured": false, 00:12:40.364 "data_offset": 2048, 00:12:40.364 "data_size": 63488 00:12:40.364 } 00:12:40.364 ] 00:12:40.364 }' 00:12:40.364 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.364 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.623 [2024-12-06 
18:09:52.720328] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:40.623 [2024-12-06 18:09:52.720410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.623 [2024-12-06 18:09:52.720438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:40.623 [2024-12-06 18:09:52.720449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.623 [2024-12-06 18:09:52.720979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.623 [2024-12-06 18:09:52.720999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:40.623 [2024-12-06 18:09:52.721123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:40.623 [2024-12-06 18:09:52.721151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:40.623 [2024-12-06 18:09:52.721317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:40.623 [2024-12-06 18:09:52.721327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:40.623 [2024-12-06 18:09:52.721617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:40.623 [2024-12-06 18:09:52.721795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:40.623 [2024-12-06 18:09:52.721815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:40.623 [2024-12-06 18:09:52.721969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.623 pt4 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.623 18:09:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.623 "name": "raid_bdev1", 00:12:40.623 "uuid": "d73d6be6-89e5-4cff-ade3-8a765426e60e", 00:12:40.623 "strip_size_kb": 0, 00:12:40.623 "state": "online", 00:12:40.623 "raid_level": "raid1", 00:12:40.623 "superblock": true, 00:12:40.623 "num_base_bdevs": 4, 00:12:40.623 "num_base_bdevs_discovered": 3, 00:12:40.623 "num_base_bdevs_operational": 3, 00:12:40.623 "base_bdevs_list": [ 00:12:40.623 { 
00:12:40.623 "name": null, 00:12:40.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.623 "is_configured": false, 00:12:40.623 "data_offset": 2048, 00:12:40.623 "data_size": 63488 00:12:40.623 }, 00:12:40.623 { 00:12:40.623 "name": "pt2", 00:12:40.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:40.623 "is_configured": true, 00:12:40.623 "data_offset": 2048, 00:12:40.623 "data_size": 63488 00:12:40.623 }, 00:12:40.623 { 00:12:40.623 "name": "pt3", 00:12:40.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:40.623 "is_configured": true, 00:12:40.623 "data_offset": 2048, 00:12:40.623 "data_size": 63488 00:12:40.623 }, 00:12:40.623 { 00:12:40.623 "name": "pt4", 00:12:40.623 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:40.623 "is_configured": true, 00:12:40.623 "data_offset": 2048, 00:12:40.623 "data_size": 63488 00:12:40.623 } 00:12:40.623 ] 00:12:40.623 }' 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.623 18:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.191 
18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.191 [2024-12-06 18:09:53.271774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d73d6be6-89e5-4cff-ade3-8a765426e60e '!=' d73d6be6-89e5-4cff-ade3-8a765426e60e ']' 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75018 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75018 ']' 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75018 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75018 00:12:41.191 killing process with pid 75018 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75018' 00:12:41.191 18:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75018 00:12:41.191 [2024-12-06 18:09:53.344372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.191 [2024-12-06 18:09:53.344483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.191 18:09:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75018 00:12:41.191 [2024-12-06 18:09:53.344574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.191 [2024-12-06 18:09:53.344588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:41.757 [2024-12-06 18:09:53.790855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.134 18:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:43.134 00:12:43.134 real 0m8.865s 00:12:43.134 user 0m13.947s 00:12:43.134 sys 0m1.565s 00:12:43.134 18:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.134 18:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.134 ************************************ 00:12:43.134 END TEST raid_superblock_test 00:12:43.134 ************************************ 00:12:43.134 18:09:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:43.134 18:09:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:43.134 18:09:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.134 18:09:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.134 ************************************ 00:12:43.134 START TEST raid_read_error_test 00:12:43.134 ************************************ 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:43.134 
18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:43.134 18:09:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Vpu4rvVcwq 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75511 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75511 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75511 ']' 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.134 18:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.134 [2024-12-06 18:09:55.190318] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:12:43.134 [2024-12-06 18:09:55.190452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75511 ] 00:12:43.392 [2024-12-06 18:09:55.367375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.392 [2024-12-06 18:09:55.490237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.650 [2024-12-06 18:09:55.713248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.650 [2024-12-06 18:09:55.713296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 BaseBdev1_malloc 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 true 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 [2024-12-06 18:09:56.152590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:44.216 [2024-12-06 18:09:56.152648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.216 [2024-12-06 18:09:56.152671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:44.216 [2024-12-06 18:09:56.152684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.216 [2024-12-06 18:09:56.155013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.216 [2024-12-06 18:09:56.155053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.216 BaseBdev1 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 BaseBdev2_malloc 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 true 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 [2024-12-06 18:09:56.223039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:44.216 [2024-12-06 18:09:56.223098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.216 [2024-12-06 18:09:56.223115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:44.216 [2024-12-06 18:09:56.223125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.216 [2024-12-06 18:09:56.225528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.216 [2024-12-06 18:09:56.225563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:44.216 BaseBdev2 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 BaseBdev3_malloc 00:12:44.216 18:09:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 true 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 [2024-12-06 18:09:56.302572] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:44.216 [2024-12-06 18:09:56.302635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.216 [2024-12-06 18:09:56.302654] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:44.216 [2024-12-06 18:09:56.302665] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.216 [2024-12-06 18:09:56.304898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.216 [2024-12-06 18:09:56.304935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:44.216 BaseBdev3 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.216 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.217 BaseBdev4_malloc 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.217 true 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.217 [2024-12-06 18:09:56.370303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:44.217 [2024-12-06 18:09:56.370370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.217 [2024-12-06 18:09:56.370390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:44.217 [2024-12-06 18:09:56.370401] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.217 [2024-12-06 18:09:56.372796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.217 [2024-12-06 18:09:56.372839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:44.217 BaseBdev4 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.217 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.475 [2024-12-06 18:09:56.382344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.475 [2024-12-06 18:09:56.384325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.475 [2024-12-06 18:09:56.384427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.475 [2024-12-06 18:09:56.384505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.475 [2024-12-06 18:09:56.384737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:44.475 [2024-12-06 18:09:56.384757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:44.475 [2024-12-06 18:09:56.385003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:44.475 [2024-12-06 18:09:56.385203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:44.475 [2024-12-06 18:09:56.385229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:44.475 [2024-12-06 18:09:56.385420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:44.475 18:09:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.475 "name": "raid_bdev1", 00:12:44.475 "uuid": "37ea08ab-f7bb-4b4a-9f5f-cf061bf7dbba", 00:12:44.475 "strip_size_kb": 0, 00:12:44.475 "state": "online", 00:12:44.475 "raid_level": "raid1", 00:12:44.475 "superblock": true, 00:12:44.475 "num_base_bdevs": 4, 00:12:44.475 "num_base_bdevs_discovered": 4, 00:12:44.475 "num_base_bdevs_operational": 4, 00:12:44.475 "base_bdevs_list": [ 00:12:44.475 { 
00:12:44.475 "name": "BaseBdev1", 00:12:44.475 "uuid": "1e55e9d6-6e9e-584f-ada7-fd226f67be9b", 00:12:44.475 "is_configured": true, 00:12:44.475 "data_offset": 2048, 00:12:44.475 "data_size": 63488 00:12:44.475 }, 00:12:44.475 { 00:12:44.475 "name": "BaseBdev2", 00:12:44.475 "uuid": "ae55f6d0-4b95-5e01-96f6-ea448eee5337", 00:12:44.475 "is_configured": true, 00:12:44.475 "data_offset": 2048, 00:12:44.475 "data_size": 63488 00:12:44.475 }, 00:12:44.475 { 00:12:44.475 "name": "BaseBdev3", 00:12:44.475 "uuid": "6838a7ec-906b-5e95-8b1d-5f761c2f6ac2", 00:12:44.475 "is_configured": true, 00:12:44.475 "data_offset": 2048, 00:12:44.475 "data_size": 63488 00:12:44.475 }, 00:12:44.475 { 00:12:44.475 "name": "BaseBdev4", 00:12:44.475 "uuid": "732b993a-a48b-52a7-a06a-4eecc40b733d", 00:12:44.475 "is_configured": true, 00:12:44.475 "data_offset": 2048, 00:12:44.475 "data_size": 63488 00:12:44.475 } 00:12:44.475 ] 00:12:44.475 }' 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.475 18:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.734 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:44.734 18:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:44.992 [2024-12-06 18:09:56.958774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:45.933 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:45.933 18:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.934 18:09:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.934 18:09:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.934 "name": "raid_bdev1", 00:12:45.934 "uuid": "37ea08ab-f7bb-4b4a-9f5f-cf061bf7dbba", 00:12:45.934 "strip_size_kb": 0, 00:12:45.934 "state": "online", 00:12:45.934 "raid_level": "raid1", 00:12:45.934 "superblock": true, 00:12:45.934 "num_base_bdevs": 4, 00:12:45.934 "num_base_bdevs_discovered": 4, 00:12:45.934 "num_base_bdevs_operational": 4, 00:12:45.934 "base_bdevs_list": [ 00:12:45.934 { 00:12:45.934 "name": "BaseBdev1", 00:12:45.934 "uuid": "1e55e9d6-6e9e-584f-ada7-fd226f67be9b", 00:12:45.934 "is_configured": true, 00:12:45.934 "data_offset": 2048, 00:12:45.934 "data_size": 63488 00:12:45.934 }, 00:12:45.934 { 00:12:45.934 "name": "BaseBdev2", 00:12:45.934 "uuid": "ae55f6d0-4b95-5e01-96f6-ea448eee5337", 00:12:45.934 "is_configured": true, 00:12:45.934 "data_offset": 2048, 00:12:45.934 "data_size": 63488 00:12:45.934 }, 00:12:45.934 { 00:12:45.934 "name": "BaseBdev3", 00:12:45.934 "uuid": "6838a7ec-906b-5e95-8b1d-5f761c2f6ac2", 00:12:45.934 "is_configured": true, 00:12:45.934 "data_offset": 2048, 00:12:45.934 "data_size": 63488 00:12:45.934 }, 00:12:45.934 { 00:12:45.934 "name": "BaseBdev4", 00:12:45.934 "uuid": "732b993a-a48b-52a7-a06a-4eecc40b733d", 00:12:45.934 "is_configured": true, 00:12:45.934 "data_offset": 2048, 00:12:45.934 "data_size": 63488 00:12:45.934 } 00:12:45.934 ] 00:12:45.934 }' 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.934 18:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.192 18:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.192 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.192 18:09:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:46.451 [2024-12-06 18:09:58.362503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.451 [2024-12-06 18:09:58.362543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.451 [2024-12-06 18:09:58.365552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.451 [2024-12-06 18:09:58.365625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.451 [2024-12-06 18:09:58.365755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.451 [2024-12-06 18:09:58.365774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:46.451 { 00:12:46.451 "results": [ 00:12:46.451 { 00:12:46.451 "job": "raid_bdev1", 00:12:46.451 "core_mask": "0x1", 00:12:46.451 "workload": "randrw", 00:12:46.451 "percentage": 50, 00:12:46.451 "status": "finished", 00:12:46.451 "queue_depth": 1, 00:12:46.451 "io_size": 131072, 00:12:46.451 "runtime": 1.404605, 00:12:46.451 "iops": 9784.24539283286, 00:12:46.451 "mibps": 1223.0306741041074, 00:12:46.451 "io_failed": 0, 00:12:46.451 "io_timeout": 0, 00:12:46.451 "avg_latency_us": 99.19003440258749, 00:12:46.451 "min_latency_us": 24.929257641921396, 00:12:46.451 "max_latency_us": 1731.4096069868995 00:12:46.451 } 00:12:46.451 ], 00:12:46.451 "core_count": 1 00:12:46.451 } 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75511 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75511 ']' 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75511 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75511 00:12:46.451 killing process with pid 75511 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75511' 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75511 00:12:46.451 18:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75511 00:12:46.451 [2024-12-06 18:09:58.412446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.717 [2024-12-06 18:09:58.774897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Vpu4rvVcwq 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:48.129 00:12:48.129 real 0m4.979s 00:12:48.129 user 0m5.959s 00:12:48.129 sys 0m0.587s 
00:12:48.129 ************************************ 00:12:48.129 END TEST raid_read_error_test 00:12:48.129 ************************************ 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.129 18:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.129 18:10:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:48.129 18:10:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:48.129 18:10:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.129 18:10:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.129 ************************************ 00:12:48.129 START TEST raid_write_error_test 00:12:48.129 ************************************ 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZjFDae9uFq 00:12:48.129 18:10:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75662 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75662 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75662 ']' 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.129 18:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.129 [2024-12-06 18:10:00.232689] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:12:48.129 [2024-12-06 18:10:00.232821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75662 ] 00:12:48.389 [2024-12-06 18:10:00.408564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.389 [2024-12-06 18:10:00.539903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.648 [2024-12-06 18:10:00.755450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.648 [2024-12-06 18:10:00.755510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.216 BaseBdev1_malloc 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.216 true 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.216 [2024-12-06 18:10:01.141739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:49.216 [2024-12-06 18:10:01.141793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.216 [2024-12-06 18:10:01.141814] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:49.216 [2024-12-06 18:10:01.141826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.216 [2024-12-06 18:10:01.144135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.216 [2024-12-06 18:10:01.144174] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:49.216 BaseBdev1 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.216 BaseBdev2_malloc 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:49.216 18:10:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.216 true 00:12:49.216 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.217 [2024-12-06 18:10:01.212852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:49.217 [2024-12-06 18:10:01.212945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.217 [2024-12-06 18:10:01.212978] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:49.217 [2024-12-06 18:10:01.213007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.217 [2024-12-06 18:10:01.215918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.217 [2024-12-06 18:10:01.215982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:49.217 BaseBdev2 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:49.217 BaseBdev3_malloc 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.217 true 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.217 [2024-12-06 18:10:01.312038] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:49.217 [2024-12-06 18:10:01.312108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.217 [2024-12-06 18:10:01.312131] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:49.217 [2024-12-06 18:10:01.312144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.217 [2024-12-06 18:10:01.314410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.217 [2024-12-06 18:10:01.314447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:49.217 BaseBdev3 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.217 BaseBdev4_malloc 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.217 true 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.217 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.217 [2024-12-06 18:10:01.380743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:49.217 [2024-12-06 18:10:01.380808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.217 [2024-12-06 18:10:01.380831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:49.217 [2024-12-06 18:10:01.380844] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.478 [2024-12-06 18:10:01.383249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.478 [2024-12-06 18:10:01.383289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:49.478 BaseBdev4 
00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.478 [2024-12-06 18:10:01.392779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.478 [2024-12-06 18:10:01.394807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.478 [2024-12-06 18:10:01.394897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.478 [2024-12-06 18:10:01.394962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:49.478 [2024-12-06 18:10:01.395216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:49.478 [2024-12-06 18:10:01.395240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:49.478 [2024-12-06 18:10:01.395547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:49.478 [2024-12-06 18:10:01.395753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:49.478 [2024-12-06 18:10:01.395772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:49.478 [2024-12-06 18:10:01.395962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.478 "name": "raid_bdev1", 00:12:49.478 "uuid": "491f5021-67b7-4f64-b138-c84ddb55225c", 00:12:49.478 "strip_size_kb": 0, 00:12:49.478 "state": "online", 00:12:49.478 "raid_level": "raid1", 00:12:49.478 "superblock": true, 00:12:49.478 "num_base_bdevs": 4, 00:12:49.478 "num_base_bdevs_discovered": 4, 00:12:49.478 
"num_base_bdevs_operational": 4, 00:12:49.478 "base_bdevs_list": [ 00:12:49.478 { 00:12:49.478 "name": "BaseBdev1", 00:12:49.478 "uuid": "db3d3556-7b1c-54db-9158-d8717cb4434c", 00:12:49.478 "is_configured": true, 00:12:49.478 "data_offset": 2048, 00:12:49.478 "data_size": 63488 00:12:49.478 }, 00:12:49.478 { 00:12:49.478 "name": "BaseBdev2", 00:12:49.478 "uuid": "5a4e2b81-bfcf-5e0b-8460-96a018a92fef", 00:12:49.478 "is_configured": true, 00:12:49.478 "data_offset": 2048, 00:12:49.478 "data_size": 63488 00:12:49.478 }, 00:12:49.478 { 00:12:49.478 "name": "BaseBdev3", 00:12:49.478 "uuid": "f82be1a5-fe2d-5d09-86eb-bf78f6a45dab", 00:12:49.478 "is_configured": true, 00:12:49.478 "data_offset": 2048, 00:12:49.478 "data_size": 63488 00:12:49.478 }, 00:12:49.478 { 00:12:49.478 "name": "BaseBdev4", 00:12:49.478 "uuid": "96f9d7d0-3251-5050-a103-6b3ed567f132", 00:12:49.478 "is_configured": true, 00:12:49.478 "data_offset": 2048, 00:12:49.478 "data_size": 63488 00:12:49.478 } 00:12:49.478 ] 00:12:49.478 }' 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.478 18:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.736 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:49.736 18:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:49.995 [2024-12-06 18:10:01.958695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:50.933 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.934 [2024-12-06 18:10:02.865906] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:50.934 [2024-12-06 18:10:02.865988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.934 [2024-12-06 18:10:02.866272] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.934 "name": "raid_bdev1", 00:12:50.934 "uuid": "491f5021-67b7-4f64-b138-c84ddb55225c", 00:12:50.934 "strip_size_kb": 0, 00:12:50.934 "state": "online", 00:12:50.934 "raid_level": "raid1", 00:12:50.934 "superblock": true, 00:12:50.934 "num_base_bdevs": 4, 00:12:50.934 "num_base_bdevs_discovered": 3, 00:12:50.934 "num_base_bdevs_operational": 3, 00:12:50.934 "base_bdevs_list": [ 00:12:50.934 { 00:12:50.934 "name": null, 00:12:50.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.934 "is_configured": false, 00:12:50.934 "data_offset": 0, 00:12:50.934 "data_size": 63488 00:12:50.934 }, 00:12:50.934 { 00:12:50.934 "name": "BaseBdev2", 00:12:50.934 "uuid": "5a4e2b81-bfcf-5e0b-8460-96a018a92fef", 00:12:50.934 "is_configured": true, 00:12:50.934 "data_offset": 2048, 00:12:50.934 "data_size": 63488 00:12:50.934 }, 00:12:50.934 { 00:12:50.934 "name": "BaseBdev3", 00:12:50.934 "uuid": "f82be1a5-fe2d-5d09-86eb-bf78f6a45dab", 00:12:50.934 "is_configured": true, 00:12:50.934 "data_offset": 2048, 00:12:50.934 "data_size": 63488 00:12:50.934 }, 00:12:50.934 { 00:12:50.934 "name": "BaseBdev4", 00:12:50.934 "uuid": "96f9d7d0-3251-5050-a103-6b3ed567f132", 00:12:50.934 "is_configured": true, 00:12:50.934 "data_offset": 2048, 00:12:50.934 "data_size": 63488 00:12:50.934 } 00:12:50.934 ] 
00:12:50.934 }' 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.934 18:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.194 [2024-12-06 18:10:03.314038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.194 [2024-12-06 18:10:03.314088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.194 [2024-12-06 18:10:03.317008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.194 [2024-12-06 18:10:03.317060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.194 [2024-12-06 18:10:03.317218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.194 [2024-12-06 18:10:03.317234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:51.194 { 00:12:51.194 "results": [ 00:12:51.194 { 00:12:51.194 "job": "raid_bdev1", 00:12:51.194 "core_mask": "0x1", 00:12:51.194 "workload": "randrw", 00:12:51.194 "percentage": 50, 00:12:51.194 "status": "finished", 00:12:51.194 "queue_depth": 1, 00:12:51.194 "io_size": 131072, 00:12:51.194 "runtime": 1.356047, 00:12:51.194 "iops": 10693.582154600836, 00:12:51.194 "mibps": 1336.6977693251044, 00:12:51.194 "io_failed": 0, 00:12:51.194 "io_timeout": 0, 00:12:51.194 "avg_latency_us": 90.56033660078856, 00:12:51.194 "min_latency_us": 25.041048034934498, 00:12:51.194 "max_latency_us": 1817.2646288209608 00:12:51.194 } 00:12:51.194 ], 00:12:51.194 "core_count": 1 
00:12:51.194 } 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75662 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75662 ']' 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75662 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75662 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.194 killing process with pid 75662 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75662' 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75662 00:12:51.194 [2024-12-06 18:10:03.358120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.194 18:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75662 00:12:51.762 [2024-12-06 18:10:03.698184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZjFDae9uFq 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:53.143 00:12:53.143 real 0m4.841s 00:12:53.143 user 0m5.706s 00:12:53.143 sys 0m0.600s 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.143 18:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.143 ************************************ 00:12:53.143 END TEST raid_write_error_test 00:12:53.143 ************************************ 00:12:53.143 18:10:05 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:53.143 18:10:05 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:53.143 18:10:05 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:53.143 18:10:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:53.143 18:10:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.143 18:10:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.143 ************************************ 00:12:53.143 START TEST raid_rebuild_test 00:12:53.143 ************************************ 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:53.143 
18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75807 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75807 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75807 ']' 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.143 18:10:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.143 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:53.143 Zero copy mechanism will not be used. 00:12:53.143 [2024-12-06 18:10:05.143249] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:12:53.143 [2024-12-06 18:10:05.143398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75807 ] 00:12:53.402 [2024-12-06 18:10:05.319425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.402 [2024-12-06 18:10:05.435819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.661 [2024-12-06 18:10:05.643475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.661 [2024-12-06 18:10:05.643512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.919 BaseBdev1_malloc 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.919 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.178 [2024-12-06 18:10:06.088431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:54.178 
[2024-12-06 18:10:06.088503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.178 [2024-12-06 18:10:06.088531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:54.178 [2024-12-06 18:10:06.088545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.178 [2024-12-06 18:10:06.090970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.178 [2024-12-06 18:10:06.091014] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:54.178 BaseBdev1 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.178 BaseBdev2_malloc 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.178 [2024-12-06 18:10:06.144426] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:54.178 [2024-12-06 18:10:06.144490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.178 [2024-12-06 18:10:06.144517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:54.178 [2024-12-06 18:10:06.144530] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.178 [2024-12-06 18:10:06.146852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.178 [2024-12-06 18:10:06.146893] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:54.178 BaseBdev2 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.178 spare_malloc 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.178 spare_delay 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.178 [2024-12-06 18:10:06.225509] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:54.178 [2024-12-06 18:10:06.225573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:54.178 [2024-12-06 18:10:06.225596] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:54.178 [2024-12-06 18:10:06.225607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.178 [2024-12-06 18:10:06.227823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.178 [2024-12-06 18:10:06.227922] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:54.178 spare 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.178 [2024-12-06 18:10:06.233547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.178 [2024-12-06 18:10:06.235464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.178 [2024-12-06 18:10:06.235614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:54.178 [2024-12-06 18:10:06.235666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:54.178 [2024-12-06 18:10:06.235984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:54.178 [2024-12-06 18:10:06.236222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:54.178 [2024-12-06 18:10:06.236274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:54.178 [2024-12-06 18:10:06.236520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.178 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.179 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.179 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.179 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.179 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.179 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.179 "name": "raid_bdev1", 00:12:54.179 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:12:54.179 "strip_size_kb": 0, 00:12:54.179 "state": "online", 00:12:54.179 
"raid_level": "raid1", 00:12:54.179 "superblock": false, 00:12:54.179 "num_base_bdevs": 2, 00:12:54.179 "num_base_bdevs_discovered": 2, 00:12:54.179 "num_base_bdevs_operational": 2, 00:12:54.179 "base_bdevs_list": [ 00:12:54.179 { 00:12:54.179 "name": "BaseBdev1", 00:12:54.179 "uuid": "f06f5793-fe58-5249-9968-272eb3051dec", 00:12:54.179 "is_configured": true, 00:12:54.179 "data_offset": 0, 00:12:54.179 "data_size": 65536 00:12:54.179 }, 00:12:54.179 { 00:12:54.179 "name": "BaseBdev2", 00:12:54.179 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:12:54.179 "is_configured": true, 00:12:54.179 "data_offset": 0, 00:12:54.179 "data_size": 65536 00:12:54.179 } 00:12:54.179 ] 00:12:54.179 }' 00:12:54.179 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.179 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.747 [2024-12-06 18:10:06.713129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:54.747 18:10:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:55.006 [2024-12-06 18:10:06.992402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:55.006 /dev/nbd0 00:12:55.006 18:10:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:55.006 18:10:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:55.006 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:55.006 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:55.006 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:55.006 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:55.006 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.007 1+0 records in 00:12:55.007 1+0 records out 00:12:55.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335379 s, 12.2 MB/s 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:55.007 18:10:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:59.312 65536+0 records in 00:12:59.312 65536+0 records out 00:12:59.312 33554432 bytes (34 MB, 32 MiB) copied, 4.30846 s, 7.8 MB/s 00:12:59.312 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.312 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.312 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.312 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.312 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:59.312 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.312 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.580 [2024-12-06 18:10:11.597308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.580 [2024-12-06 18:10:11.617416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.580 "name": "raid_bdev1", 00:12:59.580 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:12:59.580 "strip_size_kb": 0, 00:12:59.580 "state": "online", 00:12:59.580 "raid_level": "raid1", 00:12:59.580 "superblock": false, 00:12:59.580 "num_base_bdevs": 2, 00:12:59.580 "num_base_bdevs_discovered": 1, 00:12:59.580 "num_base_bdevs_operational": 1, 00:12:59.580 "base_bdevs_list": [ 00:12:59.580 { 00:12:59.580 "name": null, 00:12:59.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.580 "is_configured": false, 00:12:59.580 "data_offset": 0, 00:12:59.580 "data_size": 65536 00:12:59.580 }, 00:12:59.580 { 00:12:59.580 "name": "BaseBdev2", 00:12:59.580 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:12:59.580 "is_configured": true, 00:12:59.580 "data_offset": 0, 00:12:59.580 "data_size": 65536 00:12:59.580 } 00:12:59.580 ] 00:12:59.580 }' 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.580 18:10:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.149 18:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.149 18:10:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.149 18:10:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.149 [2024-12-06 18:10:12.084678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.149 [2024-12-06 18:10:12.102034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:13:00.149 18:10:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.149 18:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:00.149 [2024-12-06 18:10:12.104016] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.087 "name": "raid_bdev1", 00:13:01.087 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:13:01.087 "strip_size_kb": 0, 00:13:01.087 "state": "online", 00:13:01.087 "raid_level": "raid1", 00:13:01.087 "superblock": false, 00:13:01.087 "num_base_bdevs": 2, 00:13:01.087 "num_base_bdevs_discovered": 2, 00:13:01.087 "num_base_bdevs_operational": 2, 00:13:01.087 "process": { 00:13:01.087 "type": "rebuild", 00:13:01.087 "target": "spare", 00:13:01.087 "progress": { 00:13:01.087 
"blocks": 20480, 00:13:01.087 "percent": 31 00:13:01.087 } 00:13:01.087 }, 00:13:01.087 "base_bdevs_list": [ 00:13:01.087 { 00:13:01.087 "name": "spare", 00:13:01.087 "uuid": "a1c5fe07-cba9-570a-a4c5-a006e59c1c4d", 00:13:01.087 "is_configured": true, 00:13:01.087 "data_offset": 0, 00:13:01.087 "data_size": 65536 00:13:01.087 }, 00:13:01.087 { 00:13:01.087 "name": "BaseBdev2", 00:13:01.087 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:13:01.087 "is_configured": true, 00:13:01.087 "data_offset": 0, 00:13:01.087 "data_size": 65536 00:13:01.087 } 00:13:01.087 ] 00:13:01.087 }' 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.087 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.088 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:01.088 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.088 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.088 [2024-12-06 18:10:13.231523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.347 [2024-12-06 18:10:13.309966] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.347 [2024-12-06 18:10:13.310058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.347 [2024-12-06 18:10:13.310088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.347 [2024-12-06 18:10:13.310099] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.347 18:10:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.347 "name": "raid_bdev1", 00:13:01.347 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:13:01.347 "strip_size_kb": 0, 00:13:01.347 "state": "online", 00:13:01.347 "raid_level": "raid1", 00:13:01.347 
"superblock": false, 00:13:01.347 "num_base_bdevs": 2, 00:13:01.347 "num_base_bdevs_discovered": 1, 00:13:01.347 "num_base_bdevs_operational": 1, 00:13:01.347 "base_bdevs_list": [ 00:13:01.347 { 00:13:01.347 "name": null, 00:13:01.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.347 "is_configured": false, 00:13:01.347 "data_offset": 0, 00:13:01.347 "data_size": 65536 00:13:01.347 }, 00:13:01.347 { 00:13:01.347 "name": "BaseBdev2", 00:13:01.347 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:13:01.347 "is_configured": true, 00:13:01.347 "data_offset": 0, 00:13:01.347 "data_size": 65536 00:13:01.347 } 00:13:01.347 ] 00:13:01.347 }' 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.347 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:01.916 "name": "raid_bdev1", 00:13:01.916 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:13:01.916 "strip_size_kb": 0, 00:13:01.916 "state": "online", 00:13:01.916 "raid_level": "raid1", 00:13:01.916 "superblock": false, 00:13:01.916 "num_base_bdevs": 2, 00:13:01.916 "num_base_bdevs_discovered": 1, 00:13:01.916 "num_base_bdevs_operational": 1, 00:13:01.916 "base_bdevs_list": [ 00:13:01.916 { 00:13:01.916 "name": null, 00:13:01.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.916 "is_configured": false, 00:13:01.916 "data_offset": 0, 00:13:01.916 "data_size": 65536 00:13:01.916 }, 00:13:01.916 { 00:13:01.916 "name": "BaseBdev2", 00:13:01.916 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:13:01.916 "is_configured": true, 00:13:01.916 "data_offset": 0, 00:13:01.916 "data_size": 65536 00:13:01.916 } 00:13:01.916 ] 00:13:01.916 }' 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.916 18:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.916 [2024-12-06 18:10:13.983130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.916 [2024-12-06 18:10:14.002665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:01.916 18:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.916 
18:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:01.916 [2024-12-06 18:10:14.004927] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.853 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.853 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.853 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.853 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.853 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.853 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.853 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.853 18:10:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.853 18:10:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.114 "name": "raid_bdev1", 00:13:03.114 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:13:03.114 "strip_size_kb": 0, 00:13:03.114 "state": "online", 00:13:03.114 "raid_level": "raid1", 00:13:03.114 "superblock": false, 00:13:03.114 "num_base_bdevs": 2, 00:13:03.114 "num_base_bdevs_discovered": 2, 00:13:03.114 "num_base_bdevs_operational": 2, 00:13:03.114 "process": { 00:13:03.114 "type": "rebuild", 00:13:03.114 "target": "spare", 00:13:03.114 "progress": { 00:13:03.114 "blocks": 20480, 00:13:03.114 "percent": 31 00:13:03.114 } 00:13:03.114 }, 00:13:03.114 "base_bdevs_list": [ 
00:13:03.114 { 00:13:03.114 "name": "spare", 00:13:03.114 "uuid": "a1c5fe07-cba9-570a-a4c5-a006e59c1c4d", 00:13:03.114 "is_configured": true, 00:13:03.114 "data_offset": 0, 00:13:03.114 "data_size": 65536 00:13:03.114 }, 00:13:03.114 { 00:13:03.114 "name": "BaseBdev2", 00:13:03.114 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:13:03.114 "is_configured": true, 00:13:03.114 "data_offset": 0, 00:13:03.114 "data_size": 65536 00:13:03.114 } 00:13:03.114 ] 00:13:03.114 }' 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=389 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.114 
18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.114 "name": "raid_bdev1", 00:13:03.114 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:13:03.114 "strip_size_kb": 0, 00:13:03.114 "state": "online", 00:13:03.114 "raid_level": "raid1", 00:13:03.114 "superblock": false, 00:13:03.114 "num_base_bdevs": 2, 00:13:03.114 "num_base_bdevs_discovered": 2, 00:13:03.114 "num_base_bdevs_operational": 2, 00:13:03.114 "process": { 00:13:03.114 "type": "rebuild", 00:13:03.114 "target": "spare", 00:13:03.114 "progress": { 00:13:03.114 "blocks": 22528, 00:13:03.114 "percent": 34 00:13:03.114 } 00:13:03.114 }, 00:13:03.114 "base_bdevs_list": [ 00:13:03.114 { 00:13:03.114 "name": "spare", 00:13:03.114 "uuid": "a1c5fe07-cba9-570a-a4c5-a006e59c1c4d", 00:13:03.114 "is_configured": true, 00:13:03.114 "data_offset": 0, 00:13:03.114 "data_size": 65536 00:13:03.114 }, 00:13:03.114 { 00:13:03.114 "name": "BaseBdev2", 00:13:03.114 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:13:03.114 "is_configured": true, 00:13:03.114 "data_offset": 0, 00:13:03.114 "data_size": 65536 00:13:03.114 } 00:13:03.114 ] 00:13:03.114 }' 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:03.114 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.372 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.373 18:10:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.304 "name": "raid_bdev1", 00:13:04.304 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:13:04.304 "strip_size_kb": 0, 00:13:04.304 "state": "online", 00:13:04.304 "raid_level": "raid1", 00:13:04.304 "superblock": false, 00:13:04.304 "num_base_bdevs": 2, 00:13:04.304 "num_base_bdevs_discovered": 2, 00:13:04.304 "num_base_bdevs_operational": 2, 00:13:04.304 "process": { 
00:13:04.304 "type": "rebuild", 00:13:04.304 "target": "spare", 00:13:04.304 "progress": { 00:13:04.304 "blocks": 47104, 00:13:04.304 "percent": 71 00:13:04.304 } 00:13:04.304 }, 00:13:04.304 "base_bdevs_list": [ 00:13:04.304 { 00:13:04.304 "name": "spare", 00:13:04.304 "uuid": "a1c5fe07-cba9-570a-a4c5-a006e59c1c4d", 00:13:04.304 "is_configured": true, 00:13:04.304 "data_offset": 0, 00:13:04.304 "data_size": 65536 00:13:04.304 }, 00:13:04.304 { 00:13:04.304 "name": "BaseBdev2", 00:13:04.304 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:13:04.304 "is_configured": true, 00:13:04.304 "data_offset": 0, 00:13:04.304 "data_size": 65536 00:13:04.304 } 00:13:04.304 ] 00:13:04.304 }' 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.304 18:10:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.237 [2024-12-06 18:10:17.221242] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:05.237 [2024-12-06 18:10:17.221454] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:05.237 [2024-12-06 18:10:17.221521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.494 "name": "raid_bdev1", 00:13:05.494 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:13:05.494 "strip_size_kb": 0, 00:13:05.494 "state": "online", 00:13:05.494 "raid_level": "raid1", 00:13:05.494 "superblock": false, 00:13:05.494 "num_base_bdevs": 2, 00:13:05.494 "num_base_bdevs_discovered": 2, 00:13:05.494 "num_base_bdevs_operational": 2, 00:13:05.494 "base_bdevs_list": [ 00:13:05.494 { 00:13:05.494 "name": "spare", 00:13:05.494 "uuid": "a1c5fe07-cba9-570a-a4c5-a006e59c1c4d", 00:13:05.494 "is_configured": true, 00:13:05.494 "data_offset": 0, 00:13:05.494 "data_size": 65536 00:13:05.494 }, 00:13:05.494 { 00:13:05.494 "name": "BaseBdev2", 00:13:05.494 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:13:05.494 "is_configured": true, 00:13:05.494 "data_offset": 0, 00:13:05.494 "data_size": 65536 00:13:05.494 } 00:13:05.494 ] 00:13:05.494 }' 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:05.494 18:10:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.494 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.752 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.752 "name": "raid_bdev1", 00:13:05.752 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:13:05.752 "strip_size_kb": 0, 00:13:05.752 "state": "online", 00:13:05.752 "raid_level": "raid1", 00:13:05.752 "superblock": false, 00:13:05.752 "num_base_bdevs": 2, 00:13:05.752 "num_base_bdevs_discovered": 2, 00:13:05.752 "num_base_bdevs_operational": 2, 00:13:05.752 "base_bdevs_list": [ 00:13:05.752 { 00:13:05.752 "name": "spare", 00:13:05.752 "uuid": "a1c5fe07-cba9-570a-a4c5-a006e59c1c4d", 00:13:05.752 "is_configured": true, 
00:13:05.752 "data_offset": 0, 00:13:05.752 "data_size": 65536 00:13:05.752 }, 00:13:05.752 { 00:13:05.752 "name": "BaseBdev2", 00:13:05.752 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:13:05.752 "is_configured": true, 00:13:05.752 "data_offset": 0, 00:13:05.752 "data_size": 65536 00:13:05.752 } 00:13:05.752 ] 00:13:05.752 }' 00:13:05.752 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.752 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.753 "name": "raid_bdev1", 00:13:05.753 "uuid": "040ab645-104d-489e-ba3b-d7c6970d4c50", 00:13:05.753 "strip_size_kb": 0, 00:13:05.753 "state": "online", 00:13:05.753 "raid_level": "raid1", 00:13:05.753 "superblock": false, 00:13:05.753 "num_base_bdevs": 2, 00:13:05.753 "num_base_bdevs_discovered": 2, 00:13:05.753 "num_base_bdevs_operational": 2, 00:13:05.753 "base_bdevs_list": [ 00:13:05.753 { 00:13:05.753 "name": "spare", 00:13:05.753 "uuid": "a1c5fe07-cba9-570a-a4c5-a006e59c1c4d", 00:13:05.753 "is_configured": true, 00:13:05.753 "data_offset": 0, 00:13:05.753 "data_size": 65536 00:13:05.753 }, 00:13:05.753 { 00:13:05.753 "name": "BaseBdev2", 00:13:05.753 "uuid": "d9e58254-3f14-57d7-b1ac-afded9c3e322", 00:13:05.753 "is_configured": true, 00:13:05.753 "data_offset": 0, 00:13:05.753 "data_size": 65536 00:13:05.753 } 00:13:05.753 ] 00:13:05.753 }' 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.753 18:10:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.338 [2024-12-06 18:10:18.232174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.338 [2024-12-06 18:10:18.232212] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.338 [2024-12-06 18:10:18.232307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.338 [2024-12-06 18:10:18.232387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.338 [2024-12-06 18:10:18.232399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.338 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:06.598 /dev/nbd0 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.598 1+0 records in 00:13:06.598 1+0 records out 00:13:06.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515783 s, 7.9 MB/s 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.598 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:06.857 /dev/nbd1 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.857 1+0 records in 00:13:06.857 1+0 records out 00:13:06.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426574 s, 9.6 MB/s 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.857 18:10:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:06.858 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.858 18:10:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.858 18:10:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:07.116 18:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:07.116 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:07.116 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:07.116 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.116 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:07.116 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.116 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.376 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75807 00:13:07.637 18:10:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75807 ']' 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75807 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75807 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.637 killing process with pid 75807 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75807' 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75807 00:13:07.637 Received shutdown signal, test time was about 60.000000 seconds 00:13:07.637 00:13:07.637 Latency(us) 00:13:07.637 [2024-12-06T18:10:19.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.637 [2024-12-06T18:10:19.805Z] =================================================================================================================== 00:13:07.637 [2024-12-06T18:10:19.805Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:07.637 [2024-12-06 18:10:19.637497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.637 18:10:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75807 00:13:07.897 [2024-12-06 18:10:19.975844] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:09.313 00:13:09.313 real 0m16.258s 00:13:09.313 user 0m18.771s 00:13:09.313 sys 0m2.969s 00:13:09.313 18:10:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.313 ************************************ 00:13:09.313 END TEST raid_rebuild_test 00:13:09.313 ************************************ 00:13:09.313 18:10:21 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:09.313 18:10:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:09.313 18:10:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.313 18:10:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.313 ************************************ 00:13:09.313 START TEST raid_rebuild_test_sb 00:13:09.313 ************************************ 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:09.313 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76235 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76235 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76235 ']' 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.314 18:10:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:09.314 [2024-12-06 18:10:21.473465] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:13:09.314 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:09.314 Zero copy mechanism will not be used. 00:13:09.314 [2024-12-06 18:10:21.473599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76235 ] 00:13:09.573 [2024-12-06 18:10:21.641451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.832 [2024-12-06 18:10:21.801621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.110 [2024-12-06 18:10:22.042766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.110 [2024-12-06 18:10:22.042849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for 
bdev in "${base_bdevs[@]}" 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.378 BaseBdev1_malloc 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.378 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.379 [2024-12-06 18:10:22.440860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:10.379 [2024-12-06 18:10:22.440947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.379 [2024-12-06 18:10:22.440975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:10.379 [2024-12-06 18:10:22.440988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.379 [2024-12-06 18:10:22.443593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.379 [2024-12-06 18:10:22.443644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:10.379 BaseBdev1 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:10.379 18:10:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.379 BaseBdev2_malloc 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.379 [2024-12-06 18:10:22.496049] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:10.379 [2024-12-06 18:10:22.496161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.379 [2024-12-06 18:10:22.496199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:10.379 [2024-12-06 18:10:22.496216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.379 [2024-12-06 18:10:22.498756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.379 [2024-12-06 18:10:22.498811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:10.379 BaseBdev2 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.379 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.638 spare_malloc 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.638 spare_delay 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.638 [2024-12-06 18:10:22.582007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:10.638 [2024-12-06 18:10:22.582106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.638 [2024-12-06 18:10:22.582137] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:10.638 [2024-12-06 18:10:22.582150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.638 [2024-12-06 18:10:22.584772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.638 [2024-12-06 18:10:22.584827] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:10.638 spare 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.638 18:10:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.638 [2024-12-06 18:10:22.590054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.638 [2024-12-06 18:10:22.592234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.638 [2024-12-06 18:10:22.592468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:10.638 [2024-12-06 18:10:22.592491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:10.638 [2024-12-06 18:10:22.592815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:10.638 [2024-12-06 18:10:22.593025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:10.638 [2024-12-06 18:10:22.593042] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:10.638 [2024-12-06 18:10:22.593267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.638 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.638 "name": "raid_bdev1", 00:13:10.638 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:10.638 "strip_size_kb": 0, 00:13:10.638 "state": "online", 00:13:10.638 "raid_level": "raid1", 00:13:10.638 "superblock": true, 00:13:10.638 "num_base_bdevs": 2, 00:13:10.638 "num_base_bdevs_discovered": 2, 00:13:10.638 "num_base_bdevs_operational": 2, 00:13:10.638 "base_bdevs_list": [ 00:13:10.638 { 00:13:10.638 "name": "BaseBdev1", 00:13:10.638 "uuid": "948cc893-9aa9-5447-866e-de1aa61621b6", 00:13:10.638 "is_configured": true, 00:13:10.638 "data_offset": 2048, 00:13:10.638 "data_size": 63488 00:13:10.638 }, 00:13:10.638 { 00:13:10.638 "name": "BaseBdev2", 00:13:10.638 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:10.638 "is_configured": true, 00:13:10.638 "data_offset": 2048, 00:13:10.638 "data_size": 63488 00:13:10.638 } 00:13:10.638 ] 00:13:10.639 }' 00:13:10.639 18:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.639 18:10:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.897 [2024-12-06 18:10:23.025647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.897 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.155 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:11.414 [2024-12-06 18:10:23.341167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:11.414 /dev/nbd0 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.414 1+0 records in 00:13:11.414 1+0 records out 00:13:11.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479214 s, 8.5 MB/s 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:11.414 18:10:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:16.748 63488+0 records in 00:13:16.748 63488+0 records out 00:13:16.748 32505856 bytes (33 MB, 31 MiB) copied, 4.80471 s, 6.8 MB/s 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.748 18:10:28 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.748 [2024-12-06 18:10:28.473567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.748 [2024-12-06 18:10:28.485883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.748 "name": "raid_bdev1", 00:13:16.748 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:16.748 "strip_size_kb": 0, 00:13:16.748 "state": "online", 00:13:16.748 "raid_level": "raid1", 00:13:16.748 "superblock": true, 
00:13:16.748 "num_base_bdevs": 2, 00:13:16.748 "num_base_bdevs_discovered": 1, 00:13:16.748 "num_base_bdevs_operational": 1, 00:13:16.748 "base_bdevs_list": [ 00:13:16.748 { 00:13:16.748 "name": null, 00:13:16.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.748 "is_configured": false, 00:13:16.748 "data_offset": 0, 00:13:16.748 "data_size": 63488 00:13:16.748 }, 00:13:16.748 { 00:13:16.748 "name": "BaseBdev2", 00:13:16.748 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:16.748 "is_configured": true, 00:13:16.748 "data_offset": 2048, 00:13:16.748 "data_size": 63488 00:13:16.748 } 00:13:16.748 ] 00:13:16.748 }' 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.748 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.007 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:17.007 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.007 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.007 [2024-12-06 18:10:28.925205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.007 [2024-12-06 18:10:28.945476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:17.007 18:10:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.007 18:10:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:17.007 [2024-12-06 18:10:28.947751] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.943 18:10:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.943 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.943 "name": "raid_bdev1", 00:13:17.943 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:17.943 "strip_size_kb": 0, 00:13:17.943 "state": "online", 00:13:17.943 "raid_level": "raid1", 00:13:17.943 "superblock": true, 00:13:17.943 "num_base_bdevs": 2, 00:13:17.943 "num_base_bdevs_discovered": 2, 00:13:17.943 "num_base_bdevs_operational": 2, 00:13:17.943 "process": { 00:13:17.943 "type": "rebuild", 00:13:17.943 "target": "spare", 00:13:17.943 "progress": { 00:13:17.943 "blocks": 20480, 00:13:17.943 "percent": 32 00:13:17.943 } 00:13:17.943 }, 00:13:17.943 "base_bdevs_list": [ 00:13:17.943 { 00:13:17.943 "name": "spare", 00:13:17.943 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:17.943 "is_configured": true, 00:13:17.943 "data_offset": 2048, 00:13:17.943 "data_size": 63488 00:13:17.943 }, 00:13:17.943 { 00:13:17.943 "name": "BaseBdev2", 00:13:17.943 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:17.943 "is_configured": true, 00:13:17.943 "data_offset": 2048, 00:13:17.943 "data_size": 63488 
00:13:17.943 } 00:13:17.943 ] 00:13:17.943 }' 00:13:17.943 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.943 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.943 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.943 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.943 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.943 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.943 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.943 [2024-12-06 18:10:30.099237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.200 [2024-12-06 18:10:30.154511] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:18.200 [2024-12-06 18:10:30.154624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.200 [2024-12-06 18:10:30.154643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.201 [2024-12-06 18:10:30.154655] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.201 "name": "raid_bdev1", 00:13:18.201 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:18.201 "strip_size_kb": 0, 00:13:18.201 "state": "online", 00:13:18.201 "raid_level": "raid1", 00:13:18.201 "superblock": true, 00:13:18.201 "num_base_bdevs": 2, 00:13:18.201 "num_base_bdevs_discovered": 1, 00:13:18.201 "num_base_bdevs_operational": 1, 00:13:18.201 "base_bdevs_list": [ 00:13:18.201 { 00:13:18.201 "name": null, 00:13:18.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.201 "is_configured": false, 00:13:18.201 "data_offset": 0, 00:13:18.201 "data_size": 63488 00:13:18.201 }, 00:13:18.201 { 00:13:18.201 "name": "BaseBdev2", 00:13:18.201 "uuid": 
"c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:18.201 "is_configured": true, 00:13:18.201 "data_offset": 2048, 00:13:18.201 "data_size": 63488 00:13:18.201 } 00:13:18.201 ] 00:13:18.201 }' 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.201 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.772 "name": "raid_bdev1", 00:13:18.772 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:18.772 "strip_size_kb": 0, 00:13:18.772 "state": "online", 00:13:18.772 "raid_level": "raid1", 00:13:18.772 "superblock": true, 00:13:18.772 "num_base_bdevs": 2, 00:13:18.772 "num_base_bdevs_discovered": 1, 00:13:18.772 "num_base_bdevs_operational": 1, 00:13:18.772 "base_bdevs_list": [ 00:13:18.772 { 
00:13:18.772 "name": null, 00:13:18.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.772 "is_configured": false, 00:13:18.772 "data_offset": 0, 00:13:18.772 "data_size": 63488 00:13:18.772 }, 00:13:18.772 { 00:13:18.772 "name": "BaseBdev2", 00:13:18.772 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:18.772 "is_configured": true, 00:13:18.772 "data_offset": 2048, 00:13:18.772 "data_size": 63488 00:13:18.772 } 00:13:18.772 ] 00:13:18.772 }' 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.772 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.773 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.773 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.773 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.773 [2024-12-06 18:10:30.795608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.773 [2024-12-06 18:10:30.815222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:18.773 18:10:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.773 18:10:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:18.773 [2024-12-06 18:10:30.817473] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.710 18:10:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.710 "name": "raid_bdev1", 00:13:19.710 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:19.710 "strip_size_kb": 0, 00:13:19.710 "state": "online", 00:13:19.710 "raid_level": "raid1", 00:13:19.710 "superblock": true, 00:13:19.710 "num_base_bdevs": 2, 00:13:19.710 "num_base_bdevs_discovered": 2, 00:13:19.710 "num_base_bdevs_operational": 2, 00:13:19.710 "process": { 00:13:19.710 "type": "rebuild", 00:13:19.710 "target": "spare", 00:13:19.710 "progress": { 00:13:19.710 "blocks": 20480, 00:13:19.710 "percent": 32 00:13:19.710 } 00:13:19.710 }, 00:13:19.710 "base_bdevs_list": [ 00:13:19.710 { 00:13:19.710 "name": "spare", 00:13:19.710 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:19.710 "is_configured": true, 00:13:19.710 "data_offset": 2048, 00:13:19.710 "data_size": 63488 00:13:19.710 }, 00:13:19.710 { 00:13:19.710 "name": "BaseBdev2", 00:13:19.710 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:19.710 
"is_configured": true, 00:13:19.710 "data_offset": 2048, 00:13:19.710 "data_size": 63488 00:13:19.710 } 00:13:19.710 ] 00:13:19.710 }' 00:13:19.710 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:19.969 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=405 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.969 18:10:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.969 18:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.969 "name": "raid_bdev1", 00:13:19.969 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:19.969 "strip_size_kb": 0, 00:13:19.969 "state": "online", 00:13:19.969 "raid_level": "raid1", 00:13:19.969 "superblock": true, 00:13:19.969 "num_base_bdevs": 2, 00:13:19.969 "num_base_bdevs_discovered": 2, 00:13:19.969 "num_base_bdevs_operational": 2, 00:13:19.969 "process": { 00:13:19.969 "type": "rebuild", 00:13:19.969 "target": "spare", 00:13:19.969 "progress": { 00:13:19.969 "blocks": 22528, 00:13:19.969 "percent": 35 00:13:19.969 } 00:13:19.969 }, 00:13:19.969 "base_bdevs_list": [ 00:13:19.969 { 00:13:19.969 "name": "spare", 00:13:19.969 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:19.969 "is_configured": true, 00:13:19.969 "data_offset": 2048, 00:13:19.969 "data_size": 63488 00:13:19.969 }, 00:13:19.969 { 00:13:19.969 "name": "BaseBdev2", 00:13:19.969 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:19.969 "is_configured": true, 00:13:19.969 "data_offset": 2048, 00:13:19.969 "data_size": 63488 00:13:19.969 } 00:13:19.969 ] 00:13:19.969 }' 00:13:19.969 18:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.969 18:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.969 18:10:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.969 18:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.969 18:10:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.346 "name": "raid_bdev1", 00:13:21.346 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:21.346 "strip_size_kb": 0, 00:13:21.346 "state": "online", 00:13:21.346 "raid_level": "raid1", 00:13:21.346 "superblock": true, 00:13:21.346 "num_base_bdevs": 2, 00:13:21.346 "num_base_bdevs_discovered": 2, 00:13:21.346 "num_base_bdevs_operational": 2, 00:13:21.346 "process": { 
00:13:21.346 "type": "rebuild", 00:13:21.346 "target": "spare", 00:13:21.346 "progress": { 00:13:21.346 "blocks": 45056, 00:13:21.346 "percent": 70 00:13:21.346 } 00:13:21.346 }, 00:13:21.346 "base_bdevs_list": [ 00:13:21.346 { 00:13:21.346 "name": "spare", 00:13:21.346 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:21.346 "is_configured": true, 00:13:21.346 "data_offset": 2048, 00:13:21.346 "data_size": 63488 00:13:21.346 }, 00:13:21.346 { 00:13:21.346 "name": "BaseBdev2", 00:13:21.346 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:21.346 "is_configured": true, 00:13:21.346 "data_offset": 2048, 00:13:21.346 "data_size": 63488 00:13:21.346 } 00:13:21.346 ] 00:13:21.346 }' 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.346 18:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.912 [2024-12-06 18:10:33.933901] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.912 [2024-12-06 18:10:33.934008] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.912 [2024-12-06 18:10:33.934186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.170 
18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.170 "name": "raid_bdev1", 00:13:22.170 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:22.170 "strip_size_kb": 0, 00:13:22.170 "state": "online", 00:13:22.170 "raid_level": "raid1", 00:13:22.170 "superblock": true, 00:13:22.170 "num_base_bdevs": 2, 00:13:22.170 "num_base_bdevs_discovered": 2, 00:13:22.170 "num_base_bdevs_operational": 2, 00:13:22.170 "base_bdevs_list": [ 00:13:22.170 { 00:13:22.170 "name": "spare", 00:13:22.170 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:22.170 "is_configured": true, 00:13:22.170 "data_offset": 2048, 00:13:22.170 "data_size": 63488 00:13:22.170 }, 00:13:22.170 { 00:13:22.170 "name": "BaseBdev2", 00:13:22.170 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:22.170 "is_configured": true, 00:13:22.170 "data_offset": 2048, 00:13:22.170 "data_size": 63488 00:13:22.170 } 00:13:22.170 ] 00:13:22.170 }' 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:22.170 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.428 "name": "raid_bdev1", 00:13:22.428 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:22.428 "strip_size_kb": 0, 00:13:22.428 "state": "online", 00:13:22.428 "raid_level": "raid1", 00:13:22.428 "superblock": true, 00:13:22.428 "num_base_bdevs": 2, 00:13:22.428 "num_base_bdevs_discovered": 2, 00:13:22.428 "num_base_bdevs_operational": 2, 00:13:22.428 "base_bdevs_list": [ 00:13:22.428 { 00:13:22.428 
"name": "spare", 00:13:22.428 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:22.428 "is_configured": true, 00:13:22.428 "data_offset": 2048, 00:13:22.428 "data_size": 63488 00:13:22.428 }, 00:13:22.428 { 00:13:22.428 "name": "BaseBdev2", 00:13:22.428 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:22.428 "is_configured": true, 00:13:22.428 "data_offset": 2048, 00:13:22.428 "data_size": 63488 00:13:22.428 } 00:13:22.428 ] 00:13:22.428 }' 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:22.428 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.429 "name": "raid_bdev1", 00:13:22.429 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:22.429 "strip_size_kb": 0, 00:13:22.429 "state": "online", 00:13:22.429 "raid_level": "raid1", 00:13:22.429 "superblock": true, 00:13:22.429 "num_base_bdevs": 2, 00:13:22.429 "num_base_bdevs_discovered": 2, 00:13:22.429 "num_base_bdevs_operational": 2, 00:13:22.429 "base_bdevs_list": [ 00:13:22.429 { 00:13:22.429 "name": "spare", 00:13:22.429 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:22.429 "is_configured": true, 00:13:22.429 "data_offset": 2048, 00:13:22.429 "data_size": 63488 00:13:22.429 }, 00:13:22.429 { 00:13:22.429 "name": "BaseBdev2", 00:13:22.429 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:22.429 "is_configured": true, 00:13:22.429 "data_offset": 2048, 00:13:22.429 "data_size": 63488 00:13:22.429 } 00:13:22.429 ] 00:13:22.429 }' 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.429 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.995 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.995 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.995 18:10:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.995 [2024-12-06 18:10:34.991494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.995 [2024-12-06 18:10:34.991534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.995 [2024-12-06 18:10:34.991638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.995 [2024-12-06 18:10:34.991725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.995 [2024-12-06 18:10:34.991745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:22.995 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.995 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:22.995 18:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.995 18:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.995 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:23.254 /dev/nbd0 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.254 1+0 records in 00:13:23.254 1+0 records out 00:13:23.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227461 s, 18.0 MB/s 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:23.254 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:23.512 /dev/nbd1 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:23.512 18:10:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.512 1+0 records in 00:13:23.512 1+0 records out 00:13:23.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336007 s, 12.2 MB/s 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:23.512 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:23.769 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:23.769 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.769 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:23.769 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.769 
18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:23.769 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.769 18:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.031 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.294 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.294 [2024-12-06 18:10:36.457951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:24.294 [2024-12-06 18:10:36.458029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.294 [2024-12-06 18:10:36.458060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:24.294 [2024-12-06 18:10:36.458084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.552 [2024-12-06 18:10:36.460759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.552 [2024-12-06 18:10:36.460809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:24.552 [2024-12-06 18:10:36.460942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:24.552 [2024-12-06 
18:10:36.461017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.552 [2024-12-06 18:10:36.461205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.552 spare 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 [2024-12-06 18:10:36.561145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:24.552 [2024-12-06 18:10:36.561224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.552 [2024-12-06 18:10:36.561635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:24.552 [2024-12-06 18:10:36.561910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:24.552 [2024-12-06 18:10:36.561935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:24.552 [2024-12-06 18:10:36.562213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.552 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.552 "name": "raid_bdev1", 00:13:24.552 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:24.552 "strip_size_kb": 0, 00:13:24.552 "state": "online", 00:13:24.552 "raid_level": "raid1", 00:13:24.552 "superblock": true, 00:13:24.552 "num_base_bdevs": 2, 00:13:24.552 "num_base_bdevs_discovered": 2, 00:13:24.552 "num_base_bdevs_operational": 2, 00:13:24.552 "base_bdevs_list": [ 00:13:24.552 { 00:13:24.552 "name": "spare", 00:13:24.552 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:24.552 "is_configured": true, 00:13:24.552 "data_offset": 2048, 00:13:24.552 "data_size": 63488 00:13:24.552 }, 00:13:24.552 { 00:13:24.552 "name": "BaseBdev2", 00:13:24.552 "uuid": 
"c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:24.552 "is_configured": true, 00:13:24.552 "data_offset": 2048, 00:13:24.552 "data_size": 63488 00:13:24.552 } 00:13:24.552 ] 00:13:24.552 }' 00:13:24.553 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.553 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.120 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.120 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.120 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.120 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.120 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.120 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.120 18:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.120 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.120 18:10:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.120 "name": "raid_bdev1", 00:13:25.120 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:25.120 "strip_size_kb": 0, 00:13:25.120 "state": "online", 00:13:25.120 "raid_level": "raid1", 00:13:25.120 "superblock": true, 00:13:25.120 "num_base_bdevs": 2, 00:13:25.120 "num_base_bdevs_discovered": 2, 00:13:25.120 "num_base_bdevs_operational": 2, 00:13:25.120 "base_bdevs_list": [ 00:13:25.120 { 
00:13:25.120 "name": "spare", 00:13:25.120 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:25.120 "is_configured": true, 00:13:25.120 "data_offset": 2048, 00:13:25.120 "data_size": 63488 00:13:25.120 }, 00:13:25.120 { 00:13:25.120 "name": "BaseBdev2", 00:13:25.120 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:25.120 "is_configured": true, 00:13:25.120 "data_offset": 2048, 00:13:25.120 "data_size": 63488 00:13:25.120 } 00:13:25.120 ] 00:13:25.120 }' 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.120 [2024-12-06 18:10:37.173181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.120 "name": "raid_bdev1", 00:13:25.120 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:25.120 "strip_size_kb": 0, 00:13:25.120 
"state": "online", 00:13:25.120 "raid_level": "raid1", 00:13:25.120 "superblock": true, 00:13:25.120 "num_base_bdevs": 2, 00:13:25.120 "num_base_bdevs_discovered": 1, 00:13:25.120 "num_base_bdevs_operational": 1, 00:13:25.120 "base_bdevs_list": [ 00:13:25.120 { 00:13:25.120 "name": null, 00:13:25.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.120 "is_configured": false, 00:13:25.120 "data_offset": 0, 00:13:25.120 "data_size": 63488 00:13:25.120 }, 00:13:25.120 { 00:13:25.120 "name": "BaseBdev2", 00:13:25.120 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:25.120 "is_configured": true, 00:13:25.120 "data_offset": 2048, 00:13:25.120 "data_size": 63488 00:13:25.120 } 00:13:25.120 ] 00:13:25.120 }' 00:13:25.120 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.121 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.688 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:25.688 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.688 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.688 [2024-12-06 18:10:37.652388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.688 [2024-12-06 18:10:37.652651] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:25.688 [2024-12-06 18:10:37.652680] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:25.688 [2024-12-06 18:10:37.652725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.688 [2024-12-06 18:10:37.671924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:25.688 18:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.688 18:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:25.688 [2024-12-06 18:10:37.674206] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.624 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.624 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.624 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.624 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.624 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.624 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.624 18:10:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.624 18:10:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.625 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.625 18:10:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.625 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.625 "name": "raid_bdev1", 00:13:26.625 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:26.625 "strip_size_kb": 0, 00:13:26.625 "state": "online", 00:13:26.625 "raid_level": "raid1", 
00:13:26.625 "superblock": true, 00:13:26.625 "num_base_bdevs": 2, 00:13:26.625 "num_base_bdevs_discovered": 2, 00:13:26.625 "num_base_bdevs_operational": 2, 00:13:26.625 "process": { 00:13:26.625 "type": "rebuild", 00:13:26.625 "target": "spare", 00:13:26.625 "progress": { 00:13:26.625 "blocks": 20480, 00:13:26.625 "percent": 32 00:13:26.625 } 00:13:26.625 }, 00:13:26.625 "base_bdevs_list": [ 00:13:26.625 { 00:13:26.625 "name": "spare", 00:13:26.625 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:26.625 "is_configured": true, 00:13:26.625 "data_offset": 2048, 00:13:26.625 "data_size": 63488 00:13:26.625 }, 00:13:26.625 { 00:13:26.625 "name": "BaseBdev2", 00:13:26.625 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:26.625 "is_configured": true, 00:13:26.625 "data_offset": 2048, 00:13:26.625 "data_size": 63488 00:13:26.625 } 00:13:26.625 ] 00:13:26.625 }' 00:13:26.625 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.625 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.625 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.884 [2024-12-06 18:10:38.833507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.884 [2024-12-06 18:10:38.880554] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:26.884 [2024-12-06 18:10:38.880667] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:26.884 [2024-12-06 18:10:38.880686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.884 [2024-12-06 18:10:38.880697] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.884 "name": "raid_bdev1", 00:13:26.884 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:26.884 "strip_size_kb": 0, 00:13:26.884 "state": "online", 00:13:26.884 "raid_level": "raid1", 00:13:26.884 "superblock": true, 00:13:26.884 "num_base_bdevs": 2, 00:13:26.884 "num_base_bdevs_discovered": 1, 00:13:26.884 "num_base_bdevs_operational": 1, 00:13:26.884 "base_bdevs_list": [ 00:13:26.884 { 00:13:26.884 "name": null, 00:13:26.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.884 "is_configured": false, 00:13:26.884 "data_offset": 0, 00:13:26.884 "data_size": 63488 00:13:26.884 }, 00:13:26.884 { 00:13:26.884 "name": "BaseBdev2", 00:13:26.884 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:26.884 "is_configured": true, 00:13:26.884 "data_offset": 2048, 00:13:26.884 "data_size": 63488 00:13:26.884 } 00:13:26.884 ] 00:13:26.884 }' 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.884 18:10:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.453 18:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.453 18:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.453 18:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.453 [2024-12-06 18:10:39.375720] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.453 [2024-12-06 18:10:39.375810] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.453 [2024-12-06 18:10:39.375835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:27.453 [2024-12-06 18:10:39.375848] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.453 [2024-12-06 18:10:39.376405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.453 [2024-12-06 18:10:39.376443] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:27.453 [2024-12-06 18:10:39.376560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:27.453 [2024-12-06 18:10:39.376586] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:27.453 [2024-12-06 18:10:39.376598] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:27.453 [2024-12-06 18:10:39.376634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.453 [2024-12-06 18:10:39.396586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:27.453 spare 00:13:27.453 18:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.453 18:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:27.453 [2024-12-06 18:10:39.398816] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.401 "name": "raid_bdev1", 00:13:28.401 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:28.401 "strip_size_kb": 0, 00:13:28.401 "state": "online", 00:13:28.401 "raid_level": "raid1", 00:13:28.401 "superblock": true, 00:13:28.401 "num_base_bdevs": 2, 00:13:28.401 "num_base_bdevs_discovered": 2, 00:13:28.401 "num_base_bdevs_operational": 2, 00:13:28.401 "process": { 00:13:28.401 "type": "rebuild", 00:13:28.401 "target": "spare", 00:13:28.401 "progress": { 00:13:28.401 "blocks": 20480, 00:13:28.401 "percent": 32 00:13:28.401 } 00:13:28.401 }, 00:13:28.401 "base_bdevs_list": [ 00:13:28.401 { 00:13:28.401 "name": "spare", 00:13:28.401 "uuid": "342bb29d-9386-5029-ace7-71eab4fbeff7", 00:13:28.401 "is_configured": true, 00:13:28.401 "data_offset": 2048, 00:13:28.401 "data_size": 63488 00:13:28.401 }, 00:13:28.401 { 00:13:28.401 "name": "BaseBdev2", 00:13:28.401 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:28.401 "is_configured": true, 00:13:28.401 "data_offset": 2048, 00:13:28.401 "data_size": 63488 00:13:28.401 } 00:13:28.401 ] 00:13:28.401 }' 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.401 
18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.401 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.401 [2024-12-06 18:10:40.533754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.660 [2024-12-06 18:10:40.605348] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:28.660 [2024-12-06 18:10:40.605455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.660 [2024-12-06 18:10:40.605479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.660 [2024-12-06 18:10:40.605488] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.660 "name": "raid_bdev1", 00:13:28.660 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:28.660 "strip_size_kb": 0, 00:13:28.660 "state": "online", 00:13:28.660 "raid_level": "raid1", 00:13:28.660 "superblock": true, 00:13:28.660 "num_base_bdevs": 2, 00:13:28.660 "num_base_bdevs_discovered": 1, 00:13:28.660 "num_base_bdevs_operational": 1, 00:13:28.660 "base_bdevs_list": [ 00:13:28.660 { 00:13:28.660 "name": null, 00:13:28.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.660 "is_configured": false, 00:13:28.660 "data_offset": 0, 00:13:28.660 "data_size": 63488 00:13:28.660 }, 00:13:28.660 { 00:13:28.660 "name": "BaseBdev2", 00:13:28.660 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:28.660 "is_configured": true, 00:13:28.660 "data_offset": 2048, 00:13:28.660 "data_size": 63488 00:13:28.660 } 00:13:28.660 ] 00:13:28.660 }' 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.660 18:10:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.919 18:10:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.919 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.919 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.919 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.919 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.919 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.919 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.919 18:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.919 18:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.919 18:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.178 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.178 "name": "raid_bdev1", 00:13:29.178 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:29.178 "strip_size_kb": 0, 00:13:29.178 "state": "online", 00:13:29.178 "raid_level": "raid1", 00:13:29.178 "superblock": true, 00:13:29.178 "num_base_bdevs": 2, 00:13:29.178 "num_base_bdevs_discovered": 1, 00:13:29.178 "num_base_bdevs_operational": 1, 00:13:29.178 "base_bdevs_list": [ 00:13:29.178 { 00:13:29.178 "name": null, 00:13:29.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.178 "is_configured": false, 00:13:29.178 "data_offset": 0, 00:13:29.178 "data_size": 63488 00:13:29.178 }, 00:13:29.178 { 00:13:29.178 "name": "BaseBdev2", 00:13:29.178 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:29.178 "is_configured": true, 00:13:29.178 "data_offset": 2048, 00:13:29.178 "data_size": 
63488 00:13:29.178 } 00:13:29.178 ] 00:13:29.178 }' 00:13:29.178 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.178 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.178 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.178 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.178 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:29.178 18:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.178 18:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.179 18:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.179 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.179 18:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.179 18:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.179 [2024-12-06 18:10:41.217642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.179 [2024-12-06 18:10:41.217721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.179 [2024-12-06 18:10:41.217758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:29.179 [2024-12-06 18:10:41.217783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.179 [2024-12-06 18:10:41.218355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.179 [2024-12-06 18:10:41.218390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:29.179 [2024-12-06 18:10:41.218498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:29.179 [2024-12-06 18:10:41.218518] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:29.179 [2024-12-06 18:10:41.218530] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:29.179 [2024-12-06 18:10:41.218542] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:29.179 BaseBdev1 00:13:29.179 18:10:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.179 18:10:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.116 "name": "raid_bdev1", 00:13:30.116 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:30.116 "strip_size_kb": 0, 00:13:30.116 "state": "online", 00:13:30.116 "raid_level": "raid1", 00:13:30.116 "superblock": true, 00:13:30.116 "num_base_bdevs": 2, 00:13:30.116 "num_base_bdevs_discovered": 1, 00:13:30.116 "num_base_bdevs_operational": 1, 00:13:30.116 "base_bdevs_list": [ 00:13:30.116 { 00:13:30.116 "name": null, 00:13:30.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.116 "is_configured": false, 00:13:30.116 "data_offset": 0, 00:13:30.116 "data_size": 63488 00:13:30.116 }, 00:13:30.116 { 00:13:30.116 "name": "BaseBdev2", 00:13:30.116 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:30.116 "is_configured": true, 00:13:30.116 "data_offset": 2048, 00:13:30.116 "data_size": 63488 00:13:30.116 } 00:13:30.116 ] 00:13:30.116 }' 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.116 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.683 "name": "raid_bdev1", 00:13:30.683 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:30.683 "strip_size_kb": 0, 00:13:30.683 "state": "online", 00:13:30.683 "raid_level": "raid1", 00:13:30.683 "superblock": true, 00:13:30.683 "num_base_bdevs": 2, 00:13:30.683 "num_base_bdevs_discovered": 1, 00:13:30.683 "num_base_bdevs_operational": 1, 00:13:30.683 "base_bdevs_list": [ 00:13:30.683 { 00:13:30.683 "name": null, 00:13:30.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.683 "is_configured": false, 00:13:30.683 "data_offset": 0, 00:13:30.683 "data_size": 63488 00:13:30.683 }, 00:13:30.683 { 00:13:30.683 "name": "BaseBdev2", 00:13:30.683 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:30.683 "is_configured": true, 00:13:30.683 "data_offset": 2048, 00:13:30.683 "data_size": 63488 00:13:30.683 } 00:13:30.683 ] 00:13:30.683 }' 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.683 18:10:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:30.683 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.684 [2024-12-06 18:10:42.835653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.684 [2024-12-06 18:10:42.835894] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:30.684 [2024-12-06 18:10:42.835928] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:30.684 request: 00:13:30.684 { 00:13:30.684 "base_bdev": "BaseBdev1", 00:13:30.684 "raid_bdev": "raid_bdev1", 00:13:30.684 "method": 
"bdev_raid_add_base_bdev", 00:13:30.684 "req_id": 1 00:13:30.684 } 00:13:30.684 Got JSON-RPC error response 00:13:30.684 response: 00:13:30.684 { 00:13:30.684 "code": -22, 00:13:30.684 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:30.684 } 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.684 18:10:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.061 18:10:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.061 "name": "raid_bdev1", 00:13:32.061 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:32.061 "strip_size_kb": 0, 00:13:32.061 "state": "online", 00:13:32.061 "raid_level": "raid1", 00:13:32.061 "superblock": true, 00:13:32.061 "num_base_bdevs": 2, 00:13:32.061 "num_base_bdevs_discovered": 1, 00:13:32.061 "num_base_bdevs_operational": 1, 00:13:32.061 "base_bdevs_list": [ 00:13:32.061 { 00:13:32.061 "name": null, 00:13:32.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.061 "is_configured": false, 00:13:32.061 "data_offset": 0, 00:13:32.061 "data_size": 63488 00:13:32.061 }, 00:13:32.061 { 00:13:32.061 "name": "BaseBdev2", 00:13:32.061 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:32.061 "is_configured": true, 00:13:32.061 "data_offset": 2048, 00:13:32.061 "data_size": 63488 00:13:32.061 } 00:13:32.061 ] 00:13:32.061 }' 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.061 18:10:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.321 "name": "raid_bdev1", 00:13:32.321 "uuid": "769a768a-0ab7-4b98-b962-47bd49f58f3c", 00:13:32.321 "strip_size_kb": 0, 00:13:32.321 "state": "online", 00:13:32.321 "raid_level": "raid1", 00:13:32.321 "superblock": true, 00:13:32.321 "num_base_bdevs": 2, 00:13:32.321 "num_base_bdevs_discovered": 1, 00:13:32.321 "num_base_bdevs_operational": 1, 00:13:32.321 "base_bdevs_list": [ 00:13:32.321 { 00:13:32.321 "name": null, 00:13:32.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.321 "is_configured": false, 00:13:32.321 "data_offset": 0, 00:13:32.321 "data_size": 63488 00:13:32.321 }, 00:13:32.321 { 00:13:32.321 "name": "BaseBdev2", 00:13:32.321 "uuid": "c2fcad49-eca7-511e-8f2c-8af2e5fc32c9", 00:13:32.321 "is_configured": true, 00:13:32.321 "data_offset": 2048, 00:13:32.321 "data_size": 63488 00:13:32.321 } 00:13:32.321 ] 00:13:32.321 }' 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76235 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76235 ']' 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76235 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76235 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.321 killing process with pid 76235 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76235' 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76235 00:13:32.321 Received shutdown signal, test time was about 60.000000 seconds 00:13:32.321 00:13:32.321 Latency(us) 00:13:32.321 [2024-12-06T18:10:44.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.321 [2024-12-06T18:10:44.489Z] =================================================================================================================== 00:13:32.321 [2024-12-06T18:10:44.489Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:32.321 18:10:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76235 00:13:32.321 [2024-12-06 
18:10:44.472882] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.321 [2024-12-06 18:10:44.473035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.321 [2024-12-06 18:10:44.473117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.321 [2024-12-06 18:10:44.473136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:32.890 [2024-12-06 18:10:44.839897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.352 18:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:34.352 00:13:34.352 real 0m24.843s 00:13:34.352 user 0m30.339s 00:13:34.352 sys 0m3.668s 00:13:34.352 18:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.352 18:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.352 ************************************ 00:13:34.352 END TEST raid_rebuild_test_sb 00:13:34.352 ************************************ 00:13:34.352 18:10:46 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:34.352 18:10:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:34.352 18:10:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.352 18:10:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.352 ************************************ 00:13:34.352 START TEST raid_rebuild_test_io 00:13:34.352 ************************************ 00:13:34.352 18:10:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:34.352 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:34.353 
18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76976 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76976 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76976 ']' 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.353 18:10:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:34.353 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:34.353 Zero copy mechanism will not be used. 00:13:34.353 [2024-12-06 18:10:46.344698] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:13:34.353 [2024-12-06 18:10:46.344843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76976 ] 00:13:34.611 [2024-12-06 18:10:46.524813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.611 [2024-12-06 18:10:46.662300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.870 [2024-12-06 18:10:46.926691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.870 [2024-12-06 18:10:46.926777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.128 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.128 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:35.128 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.128 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:35.128 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.128 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.386 BaseBdev1_malloc 00:13:35.386 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.386 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.386 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.386 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.386 [2024-12-06 18:10:47.313931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:35.386 [2024-12-06 18:10:47.314018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.386 [2024-12-06 18:10:47.314049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:35.386 [2024-12-06 18:10:47.314075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.386 [2024-12-06 18:10:47.316646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.386 [2024-12-06 18:10:47.316703] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.386 BaseBdev1 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.387 BaseBdev2_malloc 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.387 [2024-12-06 18:10:47.376330] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:35.387 [2024-12-06 18:10:47.376420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.387 [2024-12-06 18:10:47.376455] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:35.387 [2024-12-06 18:10:47.376469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.387 [2024-12-06 18:10:47.379010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.387 [2024-12-06 18:10:47.379077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.387 BaseBdev2 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.387 spare_malloc 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.387 spare_delay 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.387 [2024-12-06 18:10:47.461866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:35.387 [2024-12-06 18:10:47.461952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.387 [2024-12-06 18:10:47.461980] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:35.387 [2024-12-06 18:10:47.461994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.387 [2024-12-06 18:10:47.464624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.387 [2024-12-06 18:10:47.464679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.387 spare 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.387 [2024-12-06 18:10:47.469898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.387 [2024-12-06 18:10:47.472051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.387 [2024-12-06 18:10:47.472187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:35.387 [2024-12-06 18:10:47.472211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:35.387 [2024-12-06 18:10:47.472537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:35.387 [2024-12-06 18:10:47.472751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:35.387 [2024-12-06 18:10:47.472771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:35.387 [2024-12-06 18:10:47.472974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.387 
"name": "raid_bdev1", 00:13:35.387 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:35.387 "strip_size_kb": 0, 00:13:35.387 "state": "online", 00:13:35.387 "raid_level": "raid1", 00:13:35.387 "superblock": false, 00:13:35.387 "num_base_bdevs": 2, 00:13:35.387 "num_base_bdevs_discovered": 2, 00:13:35.387 "num_base_bdevs_operational": 2, 00:13:35.387 "base_bdevs_list": [ 00:13:35.387 { 00:13:35.387 "name": "BaseBdev1", 00:13:35.387 "uuid": "db9be88a-1d5a-569a-9f21-17c260a5987c", 00:13:35.387 "is_configured": true, 00:13:35.387 "data_offset": 0, 00:13:35.387 "data_size": 65536 00:13:35.387 }, 00:13:35.387 { 00:13:35.387 "name": "BaseBdev2", 00:13:35.387 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:35.387 "is_configured": true, 00:13:35.387 "data_offset": 0, 00:13:35.387 "data_size": 65536 00:13:35.387 } 00:13:35.387 ] 00:13:35.387 }' 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.387 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.954 [2024-12-06 18:10:47.909532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.954 [2024-12-06 18:10:47.981054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.954 18:10:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.954 18:10:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.954 18:10:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.954 18:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.954 "name": "raid_bdev1", 00:13:35.954 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:35.954 "strip_size_kb": 0, 00:13:35.954 "state": "online", 00:13:35.954 "raid_level": "raid1", 00:13:35.954 "superblock": false, 00:13:35.954 "num_base_bdevs": 2, 00:13:35.954 "num_base_bdevs_discovered": 1, 00:13:35.954 "num_base_bdevs_operational": 1, 00:13:35.954 "base_bdevs_list": [ 00:13:35.954 { 00:13:35.954 "name": null, 00:13:35.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.954 "is_configured": false, 00:13:35.954 "data_offset": 0, 00:13:35.954 "data_size": 65536 00:13:35.954 }, 00:13:35.954 { 00:13:35.954 "name": "BaseBdev2", 00:13:35.954 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:35.954 "is_configured": true, 00:13:35.954 "data_offset": 0, 00:13:35.954 "data_size": 65536 00:13:35.954 } 00:13:35.954 ] 00:13:35.954 }' 00:13:35.954 18:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:35.954 18:10:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.212 [2024-12-06 18:10:48.133481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:36.212 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:36.212 Zero copy mechanism will not be used. 00:13:36.212 Running I/O for 60 seconds... 00:13:36.470 18:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.470 18:10:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.470 18:10:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.470 [2024-12-06 18:10:48.451618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.470 18:10:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.470 18:10:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:36.470 [2024-12-06 18:10:48.524285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:36.470 [2024-12-06 18:10:48.526598] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.729 [2024-12-06 18:10:48.659842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.729 [2024-12-06 18:10:48.660551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.729 [2024-12-06 18:10:48.889083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.729 [2024-12-06 18:10:48.889456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:37.244 160.00 IOPS, 480.00 MiB/s 
[2024-12-06T18:10:49.412Z] [2024-12-06 18:10:49.247540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:37.244 [2024-12-06 18:10:49.255254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:37.502 [2024-12-06 18:10:49.476667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.502 "name": "raid_bdev1", 00:13:37.502 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:37.502 "strip_size_kb": 0, 00:13:37.502 "state": "online", 00:13:37.502 "raid_level": "raid1", 00:13:37.502 "superblock": false, 00:13:37.502 "num_base_bdevs": 2, 00:13:37.502 
"num_base_bdevs_discovered": 2, 00:13:37.502 "num_base_bdevs_operational": 2, 00:13:37.502 "process": { 00:13:37.502 "type": "rebuild", 00:13:37.502 "target": "spare", 00:13:37.502 "progress": { 00:13:37.502 "blocks": 10240, 00:13:37.502 "percent": 15 00:13:37.502 } 00:13:37.502 }, 00:13:37.502 "base_bdevs_list": [ 00:13:37.502 { 00:13:37.502 "name": "spare", 00:13:37.502 "uuid": "4cd12b99-59ac-5f4b-a91c-32ce772165e6", 00:13:37.502 "is_configured": true, 00:13:37.502 "data_offset": 0, 00:13:37.502 "data_size": 65536 00:13:37.502 }, 00:13:37.502 { 00:13:37.502 "name": "BaseBdev2", 00:13:37.502 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:37.502 "is_configured": true, 00:13:37.502 "data_offset": 0, 00:13:37.502 "data_size": 65536 00:13:37.502 } 00:13:37.502 ] 00:13:37.502 }' 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.502 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.502 [2024-12-06 18:10:49.645674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.761 [2024-12-06 18:10:49.798785] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.761 [2024-12-06 18:10:49.801888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.761 [2024-12-06 18:10:49.801958] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.761 [2024-12-06 18:10:49.801974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.761 [2024-12-06 18:10:49.855020] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.761 "name": "raid_bdev1", 00:13:37.761 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:37.761 "strip_size_kb": 0, 00:13:37.761 "state": "online", 00:13:37.761 "raid_level": "raid1", 00:13:37.761 "superblock": false, 00:13:37.761 "num_base_bdevs": 2, 00:13:37.761 "num_base_bdevs_discovered": 1, 00:13:37.761 "num_base_bdevs_operational": 1, 00:13:37.761 "base_bdevs_list": [ 00:13:37.761 { 00:13:37.761 "name": null, 00:13:37.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.761 "is_configured": false, 00:13:37.761 "data_offset": 0, 00:13:37.761 "data_size": 65536 00:13:37.761 }, 00:13:37.761 { 00:13:37.761 "name": "BaseBdev2", 00:13:37.761 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:37.761 "is_configured": true, 00:13:37.761 "data_offset": 0, 00:13:37.761 "data_size": 65536 00:13:37.761 } 00:13:37.761 ] 00:13:37.761 }' 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.761 18:10:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.275 138.00 IOPS, 414.00 MiB/s [2024-12-06T18:10:50.443Z] 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.275 "name": "raid_bdev1", 00:13:38.275 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:38.275 "strip_size_kb": 0, 00:13:38.275 "state": "online", 00:13:38.275 "raid_level": "raid1", 00:13:38.275 "superblock": false, 00:13:38.275 "num_base_bdevs": 2, 00:13:38.275 "num_base_bdevs_discovered": 1, 00:13:38.275 "num_base_bdevs_operational": 1, 00:13:38.275 "base_bdevs_list": [ 00:13:38.275 { 00:13:38.275 "name": null, 00:13:38.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.275 "is_configured": false, 00:13:38.275 "data_offset": 0, 00:13:38.275 "data_size": 65536 00:13:38.275 }, 00:13:38.275 { 00:13:38.275 "name": "BaseBdev2", 00:13:38.275 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:38.275 "is_configured": true, 00:13:38.275 "data_offset": 0, 00:13:38.275 "data_size": 65536 00:13:38.275 } 00:13:38.275 ] 00:13:38.275 }' 00:13:38.275 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.533 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.533 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.533 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.533 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.533 
18:10:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.533 18:10:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.533 [2024-12-06 18:10:50.507454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.533 18:10:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.533 18:10:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:38.533 [2024-12-06 18:10:50.565298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:38.533 [2024-12-06 18:10:50.567570] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.533 [2024-12-06 18:10:50.685687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.533 [2024-12-06 18:10:50.686341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.792 [2024-12-06 18:10:50.821483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:38.792 [2024-12-06 18:10:50.821845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:39.049 153.33 IOPS, 460.00 MiB/s [2024-12-06T18:10:51.217Z] [2024-12-06 18:10:51.177775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.616 18:10:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.616 "name": "raid_bdev1", 00:13:39.616 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:39.616 "strip_size_kb": 0, 00:13:39.616 "state": "online", 00:13:39.616 "raid_level": "raid1", 00:13:39.616 "superblock": false, 00:13:39.616 "num_base_bdevs": 2, 00:13:39.616 "num_base_bdevs_discovered": 2, 00:13:39.616 "num_base_bdevs_operational": 2, 00:13:39.616 "process": { 00:13:39.616 "type": "rebuild", 00:13:39.616 "target": "spare", 00:13:39.616 "progress": { 00:13:39.616 "blocks": 12288, 00:13:39.616 "percent": 18 00:13:39.616 } 00:13:39.616 }, 00:13:39.616 "base_bdevs_list": [ 00:13:39.616 { 00:13:39.616 "name": "spare", 00:13:39.616 "uuid": "4cd12b99-59ac-5f4b-a91c-32ce772165e6", 00:13:39.616 "is_configured": true, 00:13:39.616 "data_offset": 0, 00:13:39.616 "data_size": 65536 00:13:39.616 }, 00:13:39.616 { 00:13:39.616 "name": "BaseBdev2", 00:13:39.616 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:39.616 "is_configured": true, 00:13:39.616 "data_offset": 0, 00:13:39.616 "data_size": 65536 00:13:39.616 } 00:13:39.616 ] 00:13:39.616 }' 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=425 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.616 "name": "raid_bdev1", 00:13:39.616 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:39.616 "strip_size_kb": 0, 00:13:39.616 "state": "online", 00:13:39.616 "raid_level": "raid1", 00:13:39.616 "superblock": false, 00:13:39.616 "num_base_bdevs": 2, 00:13:39.616 "num_base_bdevs_discovered": 2, 00:13:39.616 "num_base_bdevs_operational": 2, 00:13:39.616 "process": { 00:13:39.616 "type": "rebuild", 00:13:39.616 "target": "spare", 00:13:39.616 "progress": { 00:13:39.616 "blocks": 14336, 00:13:39.616 "percent": 21 00:13:39.616 } 00:13:39.616 }, 00:13:39.616 "base_bdevs_list": [ 00:13:39.616 { 00:13:39.616 "name": "spare", 00:13:39.616 "uuid": "4cd12b99-59ac-5f4b-a91c-32ce772165e6", 00:13:39.616 "is_configured": true, 00:13:39.616 "data_offset": 0, 00:13:39.616 "data_size": 65536 00:13:39.616 }, 00:13:39.616 { 00:13:39.616 "name": "BaseBdev2", 00:13:39.616 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:39.616 "is_configured": true, 00:13:39.616 "data_offset": 0, 00:13:39.616 "data_size": 65536 00:13:39.616 } 00:13:39.616 ] 00:13:39.616 }' 00:13:39.616 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.616 [2024-12-06 18:10:51.766082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:39.617 [2024-12-06 18:10:51.766478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:39.875 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.875 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.875 18:10:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.875 18:10:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.135 [2024-12-06 18:10:52.130902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:40.135 137.25 IOPS, 411.75 MiB/s [2024-12-06T18:10:52.303Z] [2024-12-06 18:10:52.251201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:40.394 [2024-12-06 18:10:52.466237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:40.679 [2024-12-06 18:10:52.585225] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.679 18:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:40.943 18:10:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.943 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.943 "name": "raid_bdev1", 00:13:40.943 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:40.943 "strip_size_kb": 0, 00:13:40.943 "state": "online", 00:13:40.943 "raid_level": "raid1", 00:13:40.943 "superblock": false, 00:13:40.943 "num_base_bdevs": 2, 00:13:40.943 "num_base_bdevs_discovered": 2, 00:13:40.943 "num_base_bdevs_operational": 2, 00:13:40.943 "process": { 00:13:40.943 "type": "rebuild", 00:13:40.943 "target": "spare", 00:13:40.943 "progress": { 00:13:40.943 "blocks": 30720, 00:13:40.943 "percent": 46 00:13:40.943 } 00:13:40.943 }, 00:13:40.943 "base_bdevs_list": [ 00:13:40.943 { 00:13:40.943 "name": "spare", 00:13:40.943 "uuid": "4cd12b99-59ac-5f4b-a91c-32ce772165e6", 00:13:40.943 "is_configured": true, 00:13:40.943 "data_offset": 0, 00:13:40.943 "data_size": 65536 00:13:40.943 }, 00:13:40.943 { 00:13:40.943 "name": "BaseBdev2", 00:13:40.943 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:40.943 "is_configured": true, 00:13:40.943 "data_offset": 0, 00:13:40.943 "data_size": 65536 00:13:40.943 } 00:13:40.943 ] 00:13:40.943 }' 00:13:40.943 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.943 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.943 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.943 [2024-12-06 18:10:52.937566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:40.943 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.943 18:10:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:13:41.202 120.60 IOPS, 361.80 MiB/s [2024-12-06T18:10:53.370Z] [2024-12-06 18:10:53.267240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:41.460 [2024-12-06 18:10:53.375850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.027 "name": "raid_bdev1", 00:13:42.027 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:42.027 "strip_size_kb": 0, 00:13:42.027 "state": "online", 00:13:42.027 "raid_level": "raid1", 00:13:42.027 "superblock": false, 00:13:42.027 "num_base_bdevs": 2, 00:13:42.027 
"num_base_bdevs_discovered": 2, 00:13:42.027 "num_base_bdevs_operational": 2, 00:13:42.027 "process": { 00:13:42.027 "type": "rebuild", 00:13:42.027 "target": "spare", 00:13:42.027 "progress": { 00:13:42.027 "blocks": 49152, 00:13:42.027 "percent": 75 00:13:42.027 } 00:13:42.027 }, 00:13:42.027 "base_bdevs_list": [ 00:13:42.027 { 00:13:42.027 "name": "spare", 00:13:42.027 "uuid": "4cd12b99-59ac-5f4b-a91c-32ce772165e6", 00:13:42.027 "is_configured": true, 00:13:42.027 "data_offset": 0, 00:13:42.027 "data_size": 65536 00:13:42.027 }, 00:13:42.027 { 00:13:42.027 "name": "BaseBdev2", 00:13:42.027 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:42.027 "is_configured": true, 00:13:42.027 "data_offset": 0, 00:13:42.027 "data_size": 65536 00:13:42.027 } 00:13:42.027 ] 00:13:42.027 }' 00:13:42.027 18:10:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.027 [2024-12-06 18:10:54.027270] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:42.027 18:10:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.027 18:10:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.027 18:10:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.027 18:10:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.595 108.00 IOPS, 324.00 MiB/s [2024-12-06T18:10:54.763Z] [2024-12-06 18:10:54.563658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:42.871 [2024-12-06 18:10:54.869936] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.871 [2024-12-06 18:10:54.897058] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 
00:13:42.871 [2024-12-06 18:10:54.899734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.129 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.129 97.86 IOPS, 293.57 MiB/s [2024-12-06T18:10:55.297Z] 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.129 "name": "raid_bdev1", 00:13:43.129 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:43.129 "strip_size_kb": 0, 00:13:43.129 "state": "online", 00:13:43.129 "raid_level": "raid1", 00:13:43.129 "superblock": false, 00:13:43.129 "num_base_bdevs": 2, 00:13:43.129 "num_base_bdevs_discovered": 2, 00:13:43.129 "num_base_bdevs_operational": 2, 00:13:43.129 "base_bdevs_list": [ 00:13:43.129 { 00:13:43.129 "name": "spare", 00:13:43.129 "uuid": "4cd12b99-59ac-5f4b-a91c-32ce772165e6", 00:13:43.129 
"is_configured": true, 00:13:43.129 "data_offset": 0, 00:13:43.129 "data_size": 65536 00:13:43.129 }, 00:13:43.129 { 00:13:43.129 "name": "BaseBdev2", 00:13:43.129 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:43.129 "is_configured": true, 00:13:43.129 "data_offset": 0, 00:13:43.129 "data_size": 65536 00:13:43.129 } 00:13:43.129 ] 00:13:43.130 }' 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.130 "name": "raid_bdev1", 00:13:43.130 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:43.130 "strip_size_kb": 0, 00:13:43.130 "state": "online", 00:13:43.130 "raid_level": "raid1", 00:13:43.130 "superblock": false, 00:13:43.130 "num_base_bdevs": 2, 00:13:43.130 "num_base_bdevs_discovered": 2, 00:13:43.130 "num_base_bdevs_operational": 2, 00:13:43.130 "base_bdevs_list": [ 00:13:43.130 { 00:13:43.130 "name": "spare", 00:13:43.130 "uuid": "4cd12b99-59ac-5f4b-a91c-32ce772165e6", 00:13:43.130 "is_configured": true, 00:13:43.130 "data_offset": 0, 00:13:43.130 "data_size": 65536 00:13:43.130 }, 00:13:43.130 { 00:13:43.130 "name": "BaseBdev2", 00:13:43.130 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:43.130 "is_configured": true, 00:13:43.130 "data_offset": 0, 00:13:43.130 "data_size": 65536 00:13:43.130 } 00:13:43.130 ] 00:13:43.130 }' 00:13:43.130 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.387 
18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.387 "name": "raid_bdev1", 00:13:43.387 "uuid": "666bd5de-e76b-4dfe-8043-4e743e8cebdb", 00:13:43.387 "strip_size_kb": 0, 00:13:43.387 "state": "online", 00:13:43.387 "raid_level": "raid1", 00:13:43.387 "superblock": false, 00:13:43.387 "num_base_bdevs": 2, 00:13:43.387 "num_base_bdevs_discovered": 2, 00:13:43.387 "num_base_bdevs_operational": 2, 00:13:43.387 "base_bdevs_list": [ 00:13:43.387 { 00:13:43.387 "name": "spare", 00:13:43.387 "uuid": "4cd12b99-59ac-5f4b-a91c-32ce772165e6", 00:13:43.387 "is_configured": true, 00:13:43.387 "data_offset": 0, 00:13:43.387 "data_size": 65536 00:13:43.387 }, 00:13:43.387 { 00:13:43.387 "name": "BaseBdev2", 00:13:43.387 "uuid": "e91d4a16-190a-58cb-88ca-52485ca3bed0", 00:13:43.387 "is_configured": true, 00:13:43.387 "data_offset": 0, 00:13:43.387 "data_size": 65536 
00:13:43.387 } 00:13:43.387 ] 00:13:43.387 }' 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.387 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.645 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.645 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.645 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.645 [2024-12-06 18:10:55.809892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.645 [2024-12-06 18:10:55.809939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.905 00:13:43.905 Latency(us) 00:13:43.905 [2024-12-06T18:10:56.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.905 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:43.905 raid_bdev1 : 7.76 92.60 277.79 0.00 0.00 15660.63 347.00 111268.11 00:13:43.905 [2024-12-06T18:10:56.073Z] =================================================================================================================== 00:13:43.905 [2024-12-06T18:10:56.073Z] Total : 92.60 277.79 0.00 0.00 15660.63 347.00 111268.11 00:13:43.905 [2024-12-06 18:10:55.913335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.905 [2024-12-06 18:10:55.913446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.905 [2024-12-06 18:10:55.913544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.905 [2024-12-06 18:10:55.913570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.905 { 00:13:43.905 "results": [ 00:13:43.905 { 
00:13:43.905 "job": "raid_bdev1", 00:13:43.905 "core_mask": "0x1", 00:13:43.905 "workload": "randrw", 00:13:43.905 "percentage": 50, 00:13:43.905 "status": "finished", 00:13:43.905 "queue_depth": 2, 00:13:43.905 "io_size": 3145728, 00:13:43.905 "runtime": 7.764741, 00:13:43.905 "iops": 92.59806605268611, 00:13:43.905 "mibps": 277.7941981580583, 00:13:43.905 "io_failed": 0, 00:13:43.905 "io_timeout": 0, 00:13:43.905 "avg_latency_us": 15660.633130682474, 00:13:43.905 "min_latency_us": 346.99737991266375, 00:13:43.905 "max_latency_us": 111268.10829694323 00:13:43.905 } 00:13:43.905 ], 00:13:43.905 "core_count": 1 00:13:43.905 } 00:13:43.905 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.905 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.905 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.905 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.906 18:10:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:44.164 /dev/nbd0 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.164 1+0 records in 00:13:44.164 1+0 records out 
00:13:44.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318725 s, 12.9 MB/s 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.164 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:44.423 /dev/nbd1 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.423 1+0 records in 00:13:44.423 1+0 records out 00:13:44.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479515 s, 8.5 MB/s 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.423 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.681 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:44.681 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:44.681 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.681 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:44.681 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.681 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.681 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.681 18:10:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.939 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.939 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.939 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.939 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.939 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.939 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:13:45.196 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:45.196 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.196 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:45.196 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.196 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:45.196 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.196 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:45.196 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.196 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 76976 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76976 ']' 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76976 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76976 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.454 killing process with pid 76976 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76976' 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76976 00:13:45.454 Received shutdown signal, test time was about 9.288889 seconds 00:13:45.454 00:13:45.454 Latency(us) 00:13:45.454 [2024-12-06T18:10:57.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.454 [2024-12-06T18:10:57.622Z] =================================================================================================================== 00:13:45.454 [2024-12-06T18:10:57.622Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:45.454 18:10:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76976 00:13:45.454 [2024-12-06 18:10:57.406889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.712 [2024-12-06 18:10:57.684250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:47.143 00:13:47.143 real 0m12.869s 00:13:47.143 user 
0m16.349s 00:13:47.143 sys 0m1.394s 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.143 ************************************ 00:13:47.143 END TEST raid_rebuild_test_io 00:13:47.143 ************************************ 00:13:47.143 18:10:59 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:47.143 18:10:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:47.143 18:10:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.143 18:10:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.143 ************************************ 00:13:47.143 START TEST raid_rebuild_test_sb_io 00:13:47.143 ************************************ 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77359 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 77359 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77359 ']' 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.143 18:10:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.143 [2024-12-06 18:10:59.268136] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:13:47.143 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:47.144 Zero copy mechanism will not be used. 
00:13:47.144 [2024-12-06 18:10:59.268703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77359 ] 00:13:47.403 [2024-12-06 18:10:59.450691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.661 [2024-12-06 18:10:59.585006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.661 [2024-12-06 18:10:59.800475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.661 [2024-12-06 18:10:59.800526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.228 BaseBdev1_malloc 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.228 [2024-12-06 18:11:00.248503] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:48.228 [2024-12-06 18:11:00.248600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.228 [2024-12-06 18:11:00.248628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:48.228 [2024-12-06 18:11:00.248642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.228 [2024-12-06 18:11:00.251223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.228 [2024-12-06 18:11:00.251276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:48.228 BaseBdev1 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.228 BaseBdev2_malloc 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.228 [2024-12-06 18:11:00.303414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:48.228 [2024-12-06 18:11:00.303528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:48.228 [2024-12-06 18:11:00.303559] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:48.228 [2024-12-06 18:11:00.303573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.228 [2024-12-06 18:11:00.306184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.228 [2024-12-06 18:11:00.306242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:48.228 BaseBdev2 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.228 spare_malloc 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.228 spare_delay 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.228 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.228 
[2024-12-06 18:11:00.389369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.228 [2024-12-06 18:11:00.389464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.228 [2024-12-06 18:11:00.389493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:48.228 [2024-12-06 18:11:00.389508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.228 [2024-12-06 18:11:00.392144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.229 [2024-12-06 18:11:00.392196] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.229 spare 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.487 [2024-12-06 18:11:00.401413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.487 [2024-12-06 18:11:00.403651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.487 [2024-12-06 18:11:00.403884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:48.487 [2024-12-06 18:11:00.403908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:48.487 [2024-12-06 18:11:00.404253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:48.487 [2024-12-06 18:11:00.404473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:48.487 [2024-12-06 
18:11:00.404491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:48.487 [2024-12-06 18:11:00.404727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.487 "name": "raid_bdev1", 00:13:48.487 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:48.487 "strip_size_kb": 0, 00:13:48.487 "state": "online", 00:13:48.487 "raid_level": "raid1", 00:13:48.487 "superblock": true, 00:13:48.487 "num_base_bdevs": 2, 00:13:48.487 "num_base_bdevs_discovered": 2, 00:13:48.487 "num_base_bdevs_operational": 2, 00:13:48.487 "base_bdevs_list": [ 00:13:48.487 { 00:13:48.487 "name": "BaseBdev1", 00:13:48.487 "uuid": "3e8832a7-fad0-57b8-9d0b-bf04803647d1", 00:13:48.487 "is_configured": true, 00:13:48.487 "data_offset": 2048, 00:13:48.487 "data_size": 63488 00:13:48.487 }, 00:13:48.487 { 00:13:48.487 "name": "BaseBdev2", 00:13:48.487 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:48.487 "is_configured": true, 00:13:48.487 "data_offset": 2048, 00:13:48.487 "data_size": 63488 00:13:48.487 } 00:13:48.487 ] 00:13:48.487 }' 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.487 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.745 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.745 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.745 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:49.004 [2024-12-06 18:11:00.912871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:49.004 [2024-12-06 18:11:00.996377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.004 18:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.004 "name": "raid_bdev1", 00:13:49.004 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:49.004 "strip_size_kb": 0, 00:13:49.004 "state": "online", 00:13:49.004 "raid_level": "raid1", 00:13:49.004 "superblock": true, 00:13:49.004 "num_base_bdevs": 2, 00:13:49.004 "num_base_bdevs_discovered": 1, 00:13:49.004 "num_base_bdevs_operational": 1, 00:13:49.004 "base_bdevs_list": [ 00:13:49.004 { 00:13:49.004 "name": null, 00:13:49.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.004 "is_configured": false, 00:13:49.004 "data_offset": 0, 00:13:49.004 "data_size": 63488 00:13:49.004 }, 00:13:49.004 { 00:13:49.004 "name": "BaseBdev2", 00:13:49.004 "uuid": 
"7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:49.004 "is_configured": true, 00:13:49.004 "data_offset": 2048, 00:13:49.004 "data_size": 63488 00:13:49.004 } 00:13:49.004 ] 00:13:49.004 }' 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.004 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.004 [2024-12-06 18:11:01.109738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:49.004 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:49.004 Zero copy mechanism will not be used. 00:13:49.004 Running I/O for 60 seconds... 00:13:49.571 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:49.571 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.571 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 [2024-12-06 18:11:01.473663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.571 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.571 18:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:49.571 [2024-12-06 18:11:01.533220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:49.571 [2024-12-06 18:11:01.536424] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.571 [2024-12-06 18:11:01.674765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:49.829 [2024-12-06 18:11:01.901898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.829 [2024-12-06 18:11:01.902290] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:50.087 140.00 IOPS, 420.00 MiB/s [2024-12-06T18:11:02.255Z] [2024-12-06 18:11:02.239677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:50.345 [2024-12-06 18:11:02.482250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.602 "name": "raid_bdev1", 00:13:50.602 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:50.602 "strip_size_kb": 0, 00:13:50.602 "state": "online", 00:13:50.602 "raid_level": "raid1", 00:13:50.602 "superblock": true, 00:13:50.602 "num_base_bdevs": 2, 
00:13:50.602 "num_base_bdevs_discovered": 2, 00:13:50.602 "num_base_bdevs_operational": 2, 00:13:50.602 "process": { 00:13:50.602 "type": "rebuild", 00:13:50.602 "target": "spare", 00:13:50.602 "progress": { 00:13:50.602 "blocks": 14336, 00:13:50.602 "percent": 22 00:13:50.602 } 00:13:50.602 }, 00:13:50.602 "base_bdevs_list": [ 00:13:50.602 { 00:13:50.602 "name": "spare", 00:13:50.602 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:50.602 "is_configured": true, 00:13:50.602 "data_offset": 2048, 00:13:50.602 "data_size": 63488 00:13:50.602 }, 00:13:50.602 { 00:13:50.602 "name": "BaseBdev2", 00:13:50.602 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:50.602 "is_configured": true, 00:13:50.602 "data_offset": 2048, 00:13:50.602 "data_size": 63488 00:13:50.602 } 00:13:50.602 ] 00:13:50.602 }' 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.602 [2024-12-06 18:11:02.591783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.602 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.602 [2024-12-06 18:11:02.663511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.602 [2024-12-06 18:11:02.694703] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:50.602 [2024-12-06 18:11:02.703418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:50.602 [2024-12-06 18:11:02.711427] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:50.602 [2024-12-06 18:11:02.722033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.602 [2024-12-06 18:11:02.722131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.602 [2024-12-06 18:11:02.722151] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.859 [2024-12-06 18:11:02.774997] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.859 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.859 "name": "raid_bdev1", 00:13:50.859 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:50.859 "strip_size_kb": 0, 00:13:50.859 "state": "online", 00:13:50.859 "raid_level": "raid1", 00:13:50.859 "superblock": true, 00:13:50.859 "num_base_bdevs": 2, 00:13:50.859 "num_base_bdevs_discovered": 1, 00:13:50.859 "num_base_bdevs_operational": 1, 00:13:50.859 "base_bdevs_list": [ 00:13:50.859 { 00:13:50.859 "name": null, 00:13:50.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.859 "is_configured": false, 00:13:50.859 "data_offset": 0, 00:13:50.859 "data_size": 63488 00:13:50.859 }, 00:13:50.860 { 00:13:50.860 "name": "BaseBdev2", 00:13:50.860 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:50.860 "is_configured": true, 00:13:50.860 "data_offset": 2048, 00:13:50.860 "data_size": 63488 00:13:50.860 } 00:13:50.860 ] 00:13:50.860 }' 00:13:50.860 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.860 18:11:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.118 126.50 IOPS, 379.50 MiB/s [2024-12-06T18:11:03.286Z] 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 none none 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.118 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.118 "name": "raid_bdev1", 00:13:51.118 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:51.118 "strip_size_kb": 0, 00:13:51.118 "state": "online", 00:13:51.118 "raid_level": "raid1", 00:13:51.118 "superblock": true, 00:13:51.118 "num_base_bdevs": 2, 00:13:51.118 "num_base_bdevs_discovered": 1, 00:13:51.118 "num_base_bdevs_operational": 1, 00:13:51.118 "base_bdevs_list": [ 00:13:51.118 { 00:13:51.118 "name": null, 00:13:51.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.118 "is_configured": false, 00:13:51.118 "data_offset": 0, 00:13:51.118 "data_size": 63488 00:13:51.118 }, 00:13:51.118 { 00:13:51.118 "name": "BaseBdev2", 00:13:51.119 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:51.119 "is_configured": true, 00:13:51.119 "data_offset": 2048, 00:13:51.119 "data_size": 63488 00:13:51.119 } 
00:13:51.119 ] 00:13:51.119 }' 00:13:51.119 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.378 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.378 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.378 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.378 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.378 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.378 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.378 [2024-12-06 18:11:03.373265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.378 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.378 18:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:51.378 [2024-12-06 18:11:03.431799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:51.378 [2024-12-06 18:11:03.434126] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:51.635 [2024-12-06 18:11:03.548669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:51.635 [2024-12-06 18:11:03.549336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:51.893 [2024-12-06 18:11:03.816228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:51.893 [2024-12-06 18:11:03.816642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:52.443 131.67 IOPS, 395.00 MiB/s [2024-12-06T18:11:04.611Z] 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.443 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.443 "name": "raid_bdev1", 00:13:52.443 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:52.443 "strip_size_kb": 0, 00:13:52.443 "state": "online", 00:13:52.443 "raid_level": "raid1", 00:13:52.443 "superblock": true, 00:13:52.443 "num_base_bdevs": 2, 00:13:52.443 "num_base_bdevs_discovered": 2, 00:13:52.443 "num_base_bdevs_operational": 2, 00:13:52.443 "process": { 00:13:52.443 "type": "rebuild", 00:13:52.443 "target": "spare", 00:13:52.443 "progress": { 00:13:52.443 "blocks": 12288, 00:13:52.443 "percent": 19 00:13:52.444 } 00:13:52.444 }, 00:13:52.444 "base_bdevs_list": [ 00:13:52.444 { 00:13:52.444 "name": "spare", 
00:13:52.444 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:52.444 "is_configured": true, 00:13:52.444 "data_offset": 2048, 00:13:52.444 "data_size": 63488 00:13:52.444 }, 00:13:52.444 { 00:13:52.444 "name": "BaseBdev2", 00:13:52.444 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:52.444 "is_configured": true, 00:13:52.444 "data_offset": 2048, 00:13:52.444 "data_size": 63488 00:13:52.444 } 00:13:52.444 ] 00:13:52.444 }' 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:52.444 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=438 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.444 [2024-12-06 18:11:04.541112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:52.444 [2024-12-06 18:11:04.541780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.444 "name": "raid_bdev1", 00:13:52.444 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:52.444 "strip_size_kb": 0, 00:13:52.444 "state": "online", 00:13:52.444 "raid_level": "raid1", 00:13:52.444 "superblock": true, 00:13:52.444 "num_base_bdevs": 2, 00:13:52.444 "num_base_bdevs_discovered": 2, 00:13:52.444 "num_base_bdevs_operational": 2, 00:13:52.444 "process": { 00:13:52.444 "type": "rebuild", 00:13:52.444 "target": "spare", 00:13:52.444 "progress": { 00:13:52.444 "blocks": 12288, 00:13:52.444 "percent": 19 00:13:52.444 } 00:13:52.444 }, 00:13:52.444 "base_bdevs_list": [ 00:13:52.444 { 00:13:52.444 "name": "spare", 00:13:52.444 
"uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:52.444 "is_configured": true, 00:13:52.444 "data_offset": 2048, 00:13:52.444 "data_size": 63488 00:13:52.444 }, 00:13:52.444 { 00:13:52.444 "name": "BaseBdev2", 00:13:52.444 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:52.444 "is_configured": true, 00:13:52.444 "data_offset": 2048, 00:13:52.444 "data_size": 63488 00:13:52.444 } 00:13:52.444 ] 00:13:52.444 }' 00:13:52.444 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.703 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.703 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.703 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.703 18:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.703 [2024-12-06 18:11:04.762922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:52.703 [2024-12-06 18:11:04.763351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:53.529 111.50 IOPS, 334.50 MiB/s [2024-12-06T18:11:05.697Z] [2024-12-06 18:11:05.494118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.529 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.789 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.789 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.789 "name": "raid_bdev1", 00:13:53.789 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:53.789 "strip_size_kb": 0, 00:13:53.789 "state": "online", 00:13:53.789 "raid_level": "raid1", 00:13:53.789 "superblock": true, 00:13:53.789 "num_base_bdevs": 2, 00:13:53.789 "num_base_bdevs_discovered": 2, 00:13:53.789 "num_base_bdevs_operational": 2, 00:13:53.789 "process": { 00:13:53.789 "type": "rebuild", 00:13:53.789 "target": "spare", 00:13:53.789 "progress": { 00:13:53.789 "blocks": 28672, 00:13:53.789 "percent": 45 00:13:53.789 } 00:13:53.789 }, 00:13:53.789 "base_bdevs_list": [ 00:13:53.789 { 00:13:53.789 "name": "spare", 00:13:53.789 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:53.789 "is_configured": true, 00:13:53.789 "data_offset": 2048, 00:13:53.789 "data_size": 63488 00:13:53.789 }, 00:13:53.789 { 00:13:53.789 "name": "BaseBdev2", 00:13:53.789 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:53.789 "is_configured": true, 00:13:53.789 "data_offset": 2048, 00:13:53.789 "data_size": 63488 00:13:53.789 } 00:13:53.789 ] 00:13:53.789 }' 00:13:53.789 18:11:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.789 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.789 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.789 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.789 18:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.306 101.00 IOPS, 303.00 MiB/s [2024-12-06T18:11:06.474Z] [2024-12-06 18:11:06.353381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:54.873 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.873 "name": "raid_bdev1", 00:13:54.873 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:54.873 "strip_size_kb": 0, 00:13:54.873 "state": "online", 00:13:54.873 "raid_level": "raid1", 00:13:54.873 "superblock": true, 00:13:54.873 "num_base_bdevs": 2, 00:13:54.873 "num_base_bdevs_discovered": 2, 00:13:54.873 "num_base_bdevs_operational": 2, 00:13:54.873 "process": { 00:13:54.873 "type": "rebuild", 00:13:54.873 "target": "spare", 00:13:54.873 "progress": { 00:13:54.873 "blocks": 47104, 00:13:54.874 "percent": 74 00:13:54.874 } 00:13:54.874 }, 00:13:54.874 "base_bdevs_list": [ 00:13:54.874 { 00:13:54.874 "name": "spare", 00:13:54.874 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:54.874 "is_configured": true, 00:13:54.874 "data_offset": 2048, 00:13:54.874 "data_size": 63488 00:13:54.874 }, 00:13:54.874 { 00:13:54.874 "name": "BaseBdev2", 00:13:54.874 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:54.874 "is_configured": true, 00:13:54.874 "data_offset": 2048, 00:13:54.874 "data_size": 63488 00:13:54.874 } 00:13:54.874 ] 00:13:54.874 }' 00:13:54.874 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.874 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.874 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.874 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.874 18:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.702 88.00 IOPS, 264.00 MiB/s [2024-12-06T18:11:07.870Z] [2024-12-06 18:11:07.680858] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:55.702 [2024-12-06 18:11:07.783144] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:55.702 [2024-12-06 18:11:07.785860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.961 18:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.962 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.962 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.962 "name": "raid_bdev1", 00:13:55.962 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:55.962 "strip_size_kb": 0, 00:13:55.962 "state": "online", 00:13:55.962 "raid_level": "raid1", 00:13:55.962 "superblock": true, 00:13:55.962 "num_base_bdevs": 2, 00:13:55.962 "num_base_bdevs_discovered": 2, 00:13:55.962 "num_base_bdevs_operational": 2, 00:13:55.962 "base_bdevs_list": [ 00:13:55.962 { 00:13:55.962 "name": 
"spare", 00:13:55.962 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:55.962 "is_configured": true, 00:13:55.962 "data_offset": 2048, 00:13:55.962 "data_size": 63488 00:13:55.962 }, 00:13:55.962 { 00:13:55.962 "name": "BaseBdev2", 00:13:55.962 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:55.962 "is_configured": true, 00:13:55.962 "data_offset": 2048, 00:13:55.962 "data_size": 63488 00:13:55.962 } 00:13:55.962 ] 00:13:55.962 }' 00:13:55.962 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.962 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:55.962 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.220 81.00 IOPS, 243.00 MiB/s [2024-12-06T18:11:08.388Z] 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.220 
18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.220 "name": "raid_bdev1", 00:13:56.220 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:56.220 "strip_size_kb": 0, 00:13:56.220 "state": "online", 00:13:56.220 "raid_level": "raid1", 00:13:56.220 "superblock": true, 00:13:56.220 "num_base_bdevs": 2, 00:13:56.220 "num_base_bdevs_discovered": 2, 00:13:56.220 "num_base_bdevs_operational": 2, 00:13:56.220 "base_bdevs_list": [ 00:13:56.220 { 00:13:56.220 "name": "spare", 00:13:56.220 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:56.220 "is_configured": true, 00:13:56.220 "data_offset": 2048, 00:13:56.220 "data_size": 63488 00:13:56.220 }, 00:13:56.220 { 00:13:56.220 "name": "BaseBdev2", 00:13:56.220 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:56.220 "is_configured": true, 00:13:56.220 "data_offset": 2048, 00:13:56.220 "data_size": 63488 00:13:56.220 } 00:13:56.220 ] 00:13:56.220 }' 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.220 "name": "raid_bdev1", 00:13:56.220 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:56.220 "strip_size_kb": 0, 00:13:56.220 "state": "online", 00:13:56.220 "raid_level": "raid1", 00:13:56.220 "superblock": true, 00:13:56.220 "num_base_bdevs": 2, 00:13:56.220 "num_base_bdevs_discovered": 2, 00:13:56.220 "num_base_bdevs_operational": 2, 00:13:56.220 "base_bdevs_list": [ 00:13:56.220 { 00:13:56.220 "name": "spare", 00:13:56.220 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:56.220 "is_configured": true, 00:13:56.220 "data_offset": 2048, 
00:13:56.220 "data_size": 63488 00:13:56.220 }, 00:13:56.220 { 00:13:56.220 "name": "BaseBdev2", 00:13:56.220 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:56.220 "is_configured": true, 00:13:56.220 "data_offset": 2048, 00:13:56.220 "data_size": 63488 00:13:56.220 } 00:13:56.220 ] 00:13:56.220 }' 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.220 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.790 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.790 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.790 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.790 [2024-12-06 18:11:08.766437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.790 [2024-12-06 18:11:08.766487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.790 00:13:56.790 Latency(us) 00:13:56.790 [2024-12-06T18:11:08.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.791 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:56.791 raid_bdev1 : 7.77 76.84 230.53 0.00 0.00 18886.69 343.42 117220.72 00:13:56.791 [2024-12-06T18:11:08.959Z] =================================================================================================================== 00:13:56.791 [2024-12-06T18:11:08.959Z] Total : 76.84 230.53 0.00 0.00 18886.69 343.42 117220.72 00:13:56.791 [2024-12-06 18:11:08.891774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.791 [2024-12-06 18:11:08.891876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.791 [2024-12-06 18:11:08.891983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.791 [2024-12-06 18:11:08.891994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:56.791 { 00:13:56.791 "results": [ 00:13:56.791 { 00:13:56.791 "job": "raid_bdev1", 00:13:56.791 "core_mask": "0x1", 00:13:56.791 "workload": "randrw", 00:13:56.791 "percentage": 50, 00:13:56.791 "status": "finished", 00:13:56.791 "queue_depth": 2, 00:13:56.791 "io_size": 3145728, 00:13:56.791 "runtime": 7.769005, 00:13:56.791 "iops": 76.8438171940937, 00:13:56.791 "mibps": 230.5314515822811, 00:13:56.791 "io_failed": 0, 00:13:56.791 "io_timeout": 0, 00:13:56.791 "avg_latency_us": 18886.687668327082, 00:13:56.791 "min_latency_us": 343.42008733624453, 00:13:56.791 "max_latency_us": 117220.7231441048 00:13:56.791 } 00:13:56.791 ], 00:13:56.791 "core_count": 1 00:13:56.791 } 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 
00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.791 18:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:57.050 /dev/nbd0 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 
1 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.456 1+0 records in 00:13:57.456 1+0 records out 00:13:57.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048329 s, 8.5 MB/s 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.456 18:11:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:57.456 /dev/nbd1 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.456 1+0 records in 00:13:57.456 1+0 
records out 00:13:57.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507116 s, 8.1 MB/s 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.456 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:57.715 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:57.715 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.715 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:57.715 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.715 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:57.715 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.715 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.974 18:11:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.974 18:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.233 18:11:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.233 [2024-12-06 18:11:10.168496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:58.233 [2024-12-06 18:11:10.168557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.233 [2024-12-06 18:11:10.168586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:58.233 [2024-12-06 18:11:10.168595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.233 [2024-12-06 18:11:10.170848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.233 [2024-12-06 18:11:10.170886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
spare 00:13:58.233 [2024-12-06 18:11:10.170988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:58.233 [2024-12-06 18:11:10.171037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.233 [2024-12-06 18:11:10.171201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.233 spare 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.233 [2024-12-06 18:11:10.271123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:58.233 [2024-12-06 18:11:10.271165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:58.233 [2024-12-06 18:11:10.271523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:58.233 [2024-12-06 18:11:10.271776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:58.233 [2024-12-06 18:11:10.271796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:58.233 [2024-12-06 18:11:10.272020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.233 18:11:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.233 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.234 "name": "raid_bdev1", 00:13:58.234 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:58.234 "strip_size_kb": 0, 00:13:58.234 "state": "online", 00:13:58.234 "raid_level": "raid1", 00:13:58.234 "superblock": true, 00:13:58.234 "num_base_bdevs": 2, 00:13:58.234 "num_base_bdevs_discovered": 2, 00:13:58.234 "num_base_bdevs_operational": 2, 00:13:58.234 "base_bdevs_list": [ 00:13:58.234 { 00:13:58.234 "name": "spare", 00:13:58.234 "uuid": 
"c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:58.234 "is_configured": true, 00:13:58.234 "data_offset": 2048, 00:13:58.234 "data_size": 63488 00:13:58.234 }, 00:13:58.234 { 00:13:58.234 "name": "BaseBdev2", 00:13:58.234 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:58.234 "is_configured": true, 00:13:58.234 "data_offset": 2048, 00:13:58.234 "data_size": 63488 00:13:58.234 } 00:13:58.234 ] 00:13:58.234 }' 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.234 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.801 "name": "raid_bdev1", 00:13:58.801 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:58.801 "strip_size_kb": 0, 00:13:58.801 
"state": "online", 00:13:58.801 "raid_level": "raid1", 00:13:58.801 "superblock": true, 00:13:58.801 "num_base_bdevs": 2, 00:13:58.801 "num_base_bdevs_discovered": 2, 00:13:58.801 "num_base_bdevs_operational": 2, 00:13:58.801 "base_bdevs_list": [ 00:13:58.801 { 00:13:58.801 "name": "spare", 00:13:58.801 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:13:58.801 "is_configured": true, 00:13:58.801 "data_offset": 2048, 00:13:58.801 "data_size": 63488 00:13:58.801 }, 00:13:58.801 { 00:13:58.801 "name": "BaseBdev2", 00:13:58.801 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:58.801 "is_configured": true, 00:13:58.801 "data_offset": 2048, 00:13:58.801 "data_size": 63488 00:13:58.801 } 00:13:58.801 ] 00:13:58.801 }' 00:13:58.801 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.802 
18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.802 [2024-12-06 18:11:10.855506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.802 "name": "raid_bdev1", 00:13:58.802 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:13:58.802 "strip_size_kb": 0, 00:13:58.802 "state": "online", 00:13:58.802 "raid_level": "raid1", 00:13:58.802 "superblock": true, 00:13:58.802 "num_base_bdevs": 2, 00:13:58.802 "num_base_bdevs_discovered": 1, 00:13:58.802 "num_base_bdevs_operational": 1, 00:13:58.802 "base_bdevs_list": [ 00:13:58.802 { 00:13:58.802 "name": null, 00:13:58.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.802 "is_configured": false, 00:13:58.802 "data_offset": 0, 00:13:58.802 "data_size": 63488 00:13:58.802 }, 00:13:58.802 { 00:13:58.802 "name": "BaseBdev2", 00:13:58.802 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:13:58.802 "is_configured": true, 00:13:58.802 "data_offset": 2048, 00:13:58.802 "data_size": 63488 00:13:58.802 } 00:13:58.802 ] 00:13:58.802 }' 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.802 18:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.369 18:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.369 18:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.369 18:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.369 [2024-12-06 18:11:11.334796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.369 [2024-12-06 18:11:11.335040] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:59.369 [2024-12-06 18:11:11.335084] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:59.369 [2024-12-06 18:11:11.335129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.369 [2024-12-06 18:11:11.355985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:59.369 18:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.369 18:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:59.369 [2024-12-06 18:11:11.358413] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.305 "name": "raid_bdev1", 00:14:00.305 "uuid": 
"4745d541-0bea-41c9-a744-4652dfc9faf8", 00:14:00.305 "strip_size_kb": 0, 00:14:00.305 "state": "online", 00:14:00.305 "raid_level": "raid1", 00:14:00.305 "superblock": true, 00:14:00.305 "num_base_bdevs": 2, 00:14:00.305 "num_base_bdevs_discovered": 2, 00:14:00.305 "num_base_bdevs_operational": 2, 00:14:00.305 "process": { 00:14:00.305 "type": "rebuild", 00:14:00.305 "target": "spare", 00:14:00.305 "progress": { 00:14:00.305 "blocks": 20480, 00:14:00.305 "percent": 32 00:14:00.305 } 00:14:00.305 }, 00:14:00.305 "base_bdevs_list": [ 00:14:00.305 { 00:14:00.305 "name": "spare", 00:14:00.305 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:14:00.305 "is_configured": true, 00:14:00.305 "data_offset": 2048, 00:14:00.305 "data_size": 63488 00:14:00.305 }, 00:14:00.305 { 00:14:00.305 "name": "BaseBdev2", 00:14:00.305 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:14:00.305 "is_configured": true, 00:14:00.305 "data_offset": 2048, 00:14:00.305 "data_size": 63488 00:14:00.305 } 00:14:00.305 ] 00:14:00.305 }' 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.305 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.563 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.563 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:00.563 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.563 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.563 [2024-12-06 18:11:12.517456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.563 [2024-12-06 18:11:12.565219] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:00.563 [2024-12-06 18:11:12.565444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.563 [2024-12-06 18:11:12.565468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.563 [2024-12-06 18:11:12.565480] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:00.563 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.563 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:00.563 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.563 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.564 18:11:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.564 "name": "raid_bdev1", 00:14:00.564 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:14:00.564 "strip_size_kb": 0, 00:14:00.564 "state": "online", 00:14:00.564 "raid_level": "raid1", 00:14:00.564 "superblock": true, 00:14:00.564 "num_base_bdevs": 2, 00:14:00.564 "num_base_bdevs_discovered": 1, 00:14:00.564 "num_base_bdevs_operational": 1, 00:14:00.564 "base_bdevs_list": [ 00:14:00.564 { 00:14:00.564 "name": null, 00:14:00.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.564 "is_configured": false, 00:14:00.564 "data_offset": 0, 00:14:00.564 "data_size": 63488 00:14:00.564 }, 00:14:00.564 { 00:14:00.564 "name": "BaseBdev2", 00:14:00.564 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:14:00.564 "is_configured": true, 00:14:00.564 "data_offset": 2048, 00:14:00.564 "data_size": 63488 00:14:00.564 } 00:14:00.564 ] 00:14:00.564 }' 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.564 18:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.130 18:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:01.130 18:11:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.130 18:11:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.130 [2024-12-06 18:11:13.109091] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:01.130 [2024-12-06 18:11:13.109270] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.130 [2024-12-06 18:11:13.109329] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:01.130 [2024-12-06 18:11:13.109367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.130 [2024-12-06 18:11:13.109973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.130 [2024-12-06 18:11:13.110055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:01.130 [2024-12-06 18:11:13.110223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:01.130 [2024-12-06 18:11:13.110279] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:01.130 [2024-12-06 18:11:13.110327] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:01.130 [2024-12-06 18:11:13.110383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.130 [2024-12-06 18:11:13.130129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:01.130 spare 00:14:01.130 18:11:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.130 18:11:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:01.130 [2024-12-06 18:11:13.132511] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.068 "name": "raid_bdev1", 00:14:02.068 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:14:02.068 "strip_size_kb": 0, 00:14:02.068 
"state": "online", 00:14:02.068 "raid_level": "raid1", 00:14:02.068 "superblock": true, 00:14:02.068 "num_base_bdevs": 2, 00:14:02.068 "num_base_bdevs_discovered": 2, 00:14:02.068 "num_base_bdevs_operational": 2, 00:14:02.068 "process": { 00:14:02.068 "type": "rebuild", 00:14:02.068 "target": "spare", 00:14:02.068 "progress": { 00:14:02.068 "blocks": 20480, 00:14:02.068 "percent": 32 00:14:02.068 } 00:14:02.068 }, 00:14:02.068 "base_bdevs_list": [ 00:14:02.068 { 00:14:02.068 "name": "spare", 00:14:02.068 "uuid": "c02c1f05-c1af-5e44-ba99-b9052caab574", 00:14:02.068 "is_configured": true, 00:14:02.068 "data_offset": 2048, 00:14:02.068 "data_size": 63488 00:14:02.068 }, 00:14:02.068 { 00:14:02.068 "name": "BaseBdev2", 00:14:02.068 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:14:02.068 "is_configured": true, 00:14:02.068 "data_offset": 2048, 00:14:02.068 "data_size": 63488 00:14:02.068 } 00:14:02.068 ] 00:14:02.068 }' 00:14:02.068 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.328 [2024-12-06 18:11:14.300339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.328 [2024-12-06 18:11:14.339379] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:14:02.328 [2024-12-06 18:11:14.339488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.328 [2024-12-06 18:11:14.339520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.328 [2024-12-06 18:11:14.339546] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:02.328 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.329 18:11:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.329 "name": "raid_bdev1", 00:14:02.329 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:14:02.329 "strip_size_kb": 0, 00:14:02.329 "state": "online", 00:14:02.329 "raid_level": "raid1", 00:14:02.329 "superblock": true, 00:14:02.329 "num_base_bdevs": 2, 00:14:02.329 "num_base_bdevs_discovered": 1, 00:14:02.329 "num_base_bdevs_operational": 1, 00:14:02.329 "base_bdevs_list": [ 00:14:02.329 { 00:14:02.329 "name": null, 00:14:02.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.329 "is_configured": false, 00:14:02.329 "data_offset": 0, 00:14:02.329 "data_size": 63488 00:14:02.329 }, 00:14:02.329 { 00:14:02.329 "name": "BaseBdev2", 00:14:02.329 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:14:02.329 "is_configured": true, 00:14:02.329 "data_offset": 2048, 00:14:02.329 "data_size": 63488 00:14:02.329 } 00:14:02.329 ] 00:14:02.329 }' 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.329 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.897 "name": "raid_bdev1", 00:14:02.897 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:14:02.897 "strip_size_kb": 0, 00:14:02.897 "state": "online", 00:14:02.897 "raid_level": "raid1", 00:14:02.897 "superblock": true, 00:14:02.897 "num_base_bdevs": 2, 00:14:02.897 "num_base_bdevs_discovered": 1, 00:14:02.897 "num_base_bdevs_operational": 1, 00:14:02.897 "base_bdevs_list": [ 00:14:02.897 { 00:14:02.897 "name": null, 00:14:02.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.897 "is_configured": false, 00:14:02.897 "data_offset": 0, 00:14:02.897 "data_size": 63488 00:14:02.897 }, 00:14:02.897 { 00:14:02.897 "name": "BaseBdev2", 00:14:02.897 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:14:02.897 "is_configured": true, 00:14:02.897 "data_offset": 2048, 00:14:02.897 "data_size": 63488 00:14:02.897 } 00:14:02.897 ] 00:14:02.897 }' 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.897 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.898 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.898 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:02.898 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.898 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.898 [2024-12-06 18:11:14.990270] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:02.898 [2024-12-06 18:11:14.990367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.898 [2024-12-06 18:11:14.990402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:02.898 [2024-12-06 18:11:14.990416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.898 [2024-12-06 18:11:14.990966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.898 [2024-12-06 18:11:14.990999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:02.898 [2024-12-06 18:11:14.991123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:02.898 [2024-12-06 18:11:14.991152] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:02.898 [2024-12-06 18:11:14.991165] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:02.898 [2024-12-06 18:11:14.991177] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:02.898 BaseBdev1 00:14:02.898 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.898 18:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:03.840 18:11:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:03.840 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.840 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.840 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.840 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.840 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:03.840 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.840 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.840 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.840 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.098 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.098 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.098 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.098 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.098 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.098 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.098 "name": "raid_bdev1", 00:14:04.098 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:14:04.098 "strip_size_kb": 0, 00:14:04.098 "state": "online", 00:14:04.098 "raid_level": "raid1", 00:14:04.098 "superblock": true, 00:14:04.098 "num_base_bdevs": 2, 00:14:04.098 "num_base_bdevs_discovered": 1, 00:14:04.098 "num_base_bdevs_operational": 1, 00:14:04.098 "base_bdevs_list": [ 00:14:04.098 { 00:14:04.098 "name": null, 00:14:04.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.098 "is_configured": false, 00:14:04.098 "data_offset": 0, 00:14:04.098 "data_size": 63488 00:14:04.098 }, 00:14:04.098 { 00:14:04.098 "name": "BaseBdev2", 00:14:04.098 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:14:04.098 "is_configured": true, 00:14:04.098 "data_offset": 2048, 00:14:04.098 "data_size": 63488 00:14:04.098 } 00:14:04.098 ] 00:14:04.098 }' 00:14:04.098 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.098 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.356 "name": "raid_bdev1", 00:14:04.356 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:14:04.356 "strip_size_kb": 0, 00:14:04.356 "state": "online", 00:14:04.356 "raid_level": "raid1", 00:14:04.356 "superblock": true, 00:14:04.356 "num_base_bdevs": 2, 00:14:04.356 "num_base_bdevs_discovered": 1, 00:14:04.356 "num_base_bdevs_operational": 1, 00:14:04.356 "base_bdevs_list": [ 00:14:04.356 { 00:14:04.356 "name": null, 00:14:04.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.356 "is_configured": false, 00:14:04.356 "data_offset": 0, 00:14:04.356 "data_size": 63488 00:14:04.356 }, 00:14:04.356 { 00:14:04.356 "name": "BaseBdev2", 00:14:04.356 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:14:04.356 "is_configured": true, 00:14:04.356 "data_offset": 2048, 00:14:04.356 "data_size": 63488 00:14:04.356 } 00:14:04.356 ] 00:14:04.356 }' 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.356 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.617 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.617 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.617 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.617 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.618 [2024-12-06 18:11:16.584160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.618 [2024-12-06 18:11:16.584373] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:04.618 [2024-12-06 18:11:16.584400] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:04.618 request: 00:14:04.618 { 00:14:04.618 "base_bdev": "BaseBdev1", 00:14:04.618 "raid_bdev": "raid_bdev1", 00:14:04.618 "method": "bdev_raid_add_base_bdev", 00:14:04.618 "req_id": 1 00:14:04.618 } 00:14:04.618 Got JSON-RPC error response 00:14:04.618 response: 00:14:04.618 { 00:14:04.618 "code": -22, 00:14:04.618 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:04.618 } 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.618 18:11:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.602 18:11:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.602 "name": "raid_bdev1", 00:14:05.602 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:14:05.602 "strip_size_kb": 0, 00:14:05.602 "state": "online", 00:14:05.602 "raid_level": "raid1", 00:14:05.602 "superblock": true, 00:14:05.602 "num_base_bdevs": 2, 00:14:05.602 "num_base_bdevs_discovered": 1, 00:14:05.602 "num_base_bdevs_operational": 1, 00:14:05.602 "base_bdevs_list": [ 00:14:05.602 { 00:14:05.602 "name": null, 00:14:05.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.602 "is_configured": false, 00:14:05.602 "data_offset": 0, 00:14:05.602 "data_size": 63488 00:14:05.602 }, 00:14:05.602 { 00:14:05.602 "name": "BaseBdev2", 00:14:05.602 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:14:05.602 "is_configured": true, 00:14:05.602 "data_offset": 2048, 00:14:05.602 "data_size": 63488 00:14:05.602 } 00:14:05.602 ] 00:14:05.602 }' 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.602 18:11:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.879 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.879 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.879 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.879 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.879 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.879 18:11:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.879 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.879 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.879 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.138 "name": "raid_bdev1", 00:14:06.138 "uuid": "4745d541-0bea-41c9-a744-4652dfc9faf8", 00:14:06.138 "strip_size_kb": 0, 00:14:06.138 "state": "online", 00:14:06.138 "raid_level": "raid1", 00:14:06.138 "superblock": true, 00:14:06.138 "num_base_bdevs": 2, 00:14:06.138 "num_base_bdevs_discovered": 1, 00:14:06.138 "num_base_bdevs_operational": 1, 00:14:06.138 "base_bdevs_list": [ 00:14:06.138 { 00:14:06.138 "name": null, 00:14:06.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.138 "is_configured": false, 00:14:06.138 "data_offset": 0, 00:14:06.138 "data_size": 63488 00:14:06.138 }, 00:14:06.138 { 00:14:06.138 "name": "BaseBdev2", 00:14:06.138 "uuid": "7fa0e7ef-98f1-5d58-8eab-9d9b0ba6fd93", 00:14:06.138 "is_configured": true, 00:14:06.138 "data_offset": 2048, 00:14:06.138 "data_size": 63488 00:14:06.138 } 00:14:06.138 ] 00:14:06.138 }' 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.138 18:11:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77359 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77359 ']' 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77359 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77359 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.138 killing process with pid 77359 00:14:06.138 Received shutdown signal, test time was about 17.141761 seconds 00:14:06.138 00:14:06.138 Latency(us) 00:14:06.138 [2024-12-06T18:11:18.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.138 [2024-12-06T18:11:18.306Z] =================================================================================================================== 00:14:06.138 [2024-12-06T18:11:18.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77359' 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77359 00:14:06.138 18:11:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77359 00:14:06.138 [2024-12-06 18:11:18.220891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.138 [2024-12-06 18:11:18.221053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.138 [2024-12-06 18:11:18.221197] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.138 [2024-12-06 18:11:18.221218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:06.396 [2024-12-06 18:11:18.498996] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.780 18:11:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:07.780 00:14:07.780 real 0m20.762s 00:14:07.780 user 0m27.252s 00:14:07.780 sys 0m2.160s 00:14:07.780 18:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.780 ************************************ 00:14:07.780 END TEST raid_rebuild_test_sb_io 00:14:07.780 ************************************ 00:14:07.780 18:11:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.040 18:11:19 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:08.040 18:11:19 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:08.040 18:11:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:08.040 18:11:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.040 18:11:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.040 ************************************ 00:14:08.040 START TEST raid_rebuild_test 00:14:08.040 ************************************ 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:08.040 18:11:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:08.040 18:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78048 00:14:08.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78048 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78048 ']' 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.040 18:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.040 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:08.040 Zero copy mechanism will not be used. 
00:14:08.040 [2024-12-06 18:11:20.142253] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:14:08.040 [2024-12-06 18:11:20.142509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78048 ] 00:14:08.299 [2024-12-06 18:11:20.332119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.558 [2024-12-06 18:11:20.467829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.558 [2024-12-06 18:11:20.708266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.558 [2024-12-06 18:11:20.708461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.127 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.127 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:09.127 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.127 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:09.127 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.127 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.127 BaseBdev1_malloc 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 
[2024-12-06 18:11:21.132862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:09.128 [2024-12-06 18:11:21.132965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.128 [2024-12-06 18:11:21.132997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:09.128 [2024-12-06 18:11:21.133011] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.128 [2024-12-06 18:11:21.135665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.128 [2024-12-06 18:11:21.135808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.128 BaseBdev1 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 BaseBdev2_malloc 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 [2024-12-06 18:11:21.196947] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:09.128 [2024-12-06 18:11:21.197043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:09.128 [2024-12-06 18:11:21.197095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:09.128 [2024-12-06 18:11:21.197110] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.128 [2024-12-06 18:11:21.199774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.128 [2024-12-06 18:11:21.199838] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:09.128 BaseBdev2 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 BaseBdev3_malloc 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.128 [2024-12-06 18:11:21.271202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:09.128 [2024-12-06 18:11:21.271381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.128 [2024-12-06 18:11:21.271418] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:09.128 [2024-12-06 18:11:21.271432] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.128 [2024-12-06 18:11:21.274112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.128 [2024-12-06 18:11:21.274171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:09.128 BaseBdev3 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.128 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.398 BaseBdev4_malloc 00:14:09.398 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.398 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:09.398 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.398 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.398 [2024-12-06 18:11:21.334436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:09.398 [2024-12-06 18:11:21.334613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.398 [2024-12-06 18:11:21.334649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:09.398 [2024-12-06 18:11:21.334662] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.399 [2024-12-06 18:11:21.337242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.399 [2024-12-06 18:11:21.337302] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:09.399 BaseBdev4 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.399 spare_malloc 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.399 spare_delay 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.399 [2024-12-06 18:11:21.411785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:09.399 [2024-12-06 18:11:21.411886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.399 [2024-12-06 18:11:21.411914] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:09.399 [2024-12-06 18:11:21.411928] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.399 [2024-12-06 
18:11:21.414613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.399 [2024-12-06 18:11:21.414671] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:09.399 spare 00:14:09.399 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.400 [2024-12-06 18:11:21.423819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.400 [2024-12-06 18:11:21.426079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:09.400 [2024-12-06 18:11:21.426172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.400 [2024-12-06 18:11:21.426237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:09.400 [2024-12-06 18:11:21.426356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:09.400 [2024-12-06 18:11:21.426372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:09.400 [2024-12-06 18:11:21.426725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:09.400 [2024-12-06 18:11:21.426951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:09.400 [2024-12-06 18:11:21.426966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:09.400 [2024-12-06 18:11:21.427210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.400 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.400 "name": "raid_bdev1", 00:14:09.400 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:09.400 "strip_size_kb": 0, 00:14:09.400 "state": "online", 00:14:09.400 "raid_level": 
"raid1", 00:14:09.400 "superblock": false, 00:14:09.400 "num_base_bdevs": 4, 00:14:09.400 "num_base_bdevs_discovered": 4, 00:14:09.401 "num_base_bdevs_operational": 4, 00:14:09.401 "base_bdevs_list": [ 00:14:09.401 { 00:14:09.401 "name": "BaseBdev1", 00:14:09.401 "uuid": "d84426d4-f0e4-50a6-b0c7-733ef308b971", 00:14:09.401 "is_configured": true, 00:14:09.401 "data_offset": 0, 00:14:09.401 "data_size": 65536 00:14:09.401 }, 00:14:09.401 { 00:14:09.401 "name": "BaseBdev2", 00:14:09.401 "uuid": "dcde736e-b88f-57a5-a045-fe8fa09a9736", 00:14:09.401 "is_configured": true, 00:14:09.401 "data_offset": 0, 00:14:09.401 "data_size": 65536 00:14:09.401 }, 00:14:09.401 { 00:14:09.401 "name": "BaseBdev3", 00:14:09.401 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:09.401 "is_configured": true, 00:14:09.401 "data_offset": 0, 00:14:09.401 "data_size": 65536 00:14:09.401 }, 00:14:09.401 { 00:14:09.401 "name": "BaseBdev4", 00:14:09.401 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:09.401 "is_configured": true, 00:14:09.401 "data_offset": 0, 00:14:09.401 "data_size": 65536 00:14:09.401 } 00:14:09.401 ] 00:14:09.401 }' 00:14:09.401 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.401 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 [2024-12-06 18:11:21.892048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.970 18:11:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:09.970 18:11:21 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:10.230 [2024-12-06 18:11:22.243346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:10.230 /dev/nbd0 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.230 1+0 records in 00:14:10.230 1+0 records out 00:14:10.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530954 s, 7.7 MB/s 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:10.230 18:11:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:18.354 65536+0 records in 00:14:18.354 65536+0 records out 00:14:18.354 33554432 bytes (34 MB, 32 MiB) copied, 7.10576 s, 4.7 MB/s 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:18.354 [2024-12-06 18:11:29.664845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:18.354 
18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.354 [2024-12-06 18:11:29.700922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.354 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.355 18:11:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.355 "name": "raid_bdev1", 00:14:18.355 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:18.355 "strip_size_kb": 0, 00:14:18.355 "state": "online", 00:14:18.355 "raid_level": "raid1", 00:14:18.355 "superblock": false, 00:14:18.355 "num_base_bdevs": 4, 00:14:18.355 "num_base_bdevs_discovered": 3, 00:14:18.355 "num_base_bdevs_operational": 3, 00:14:18.355 "base_bdevs_list": [ 00:14:18.355 { 00:14:18.355 "name": null, 00:14:18.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.355 "is_configured": false, 00:14:18.355 "data_offset": 0, 00:14:18.355 "data_size": 65536 00:14:18.355 }, 00:14:18.355 { 00:14:18.355 "name": "BaseBdev2", 00:14:18.355 "uuid": "dcde736e-b88f-57a5-a045-fe8fa09a9736", 00:14:18.355 "is_configured": true, 00:14:18.355 "data_offset": 0, 00:14:18.355 "data_size": 65536 00:14:18.355 }, 00:14:18.355 { 00:14:18.355 "name": "BaseBdev3", 00:14:18.355 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:18.355 "is_configured": true, 00:14:18.355 "data_offset": 0, 00:14:18.355 "data_size": 65536 00:14:18.355 }, 00:14:18.355 { 00:14:18.355 "name": "BaseBdev4", 00:14:18.355 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:18.355 
"is_configured": true, 00:14:18.355 "data_offset": 0, 00:14:18.355 "data_size": 65536 00:14:18.355 } 00:14:18.355 ] 00:14:18.355 }' 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.355 18:11:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.355 18:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.355 18:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.355 18:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.355 [2024-12-06 18:11:30.184117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.355 [2024-12-06 18:11:30.201041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:18.355 18:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.355 18:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:18.355 [2024-12-06 18:11:30.203316] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.292 
18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.292 "name": "raid_bdev1", 00:14:19.292 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:19.292 "strip_size_kb": 0, 00:14:19.292 "state": "online", 00:14:19.292 "raid_level": "raid1", 00:14:19.292 "superblock": false, 00:14:19.292 "num_base_bdevs": 4, 00:14:19.292 "num_base_bdevs_discovered": 4, 00:14:19.292 "num_base_bdevs_operational": 4, 00:14:19.292 "process": { 00:14:19.292 "type": "rebuild", 00:14:19.292 "target": "spare", 00:14:19.292 "progress": { 00:14:19.292 "blocks": 20480, 00:14:19.292 "percent": 31 00:14:19.292 } 00:14:19.292 }, 00:14:19.292 "base_bdevs_list": [ 00:14:19.292 { 00:14:19.292 "name": "spare", 00:14:19.292 "uuid": "891a1ae6-2038-5cce-8b5c-0a173ad283cc", 00:14:19.292 "is_configured": true, 00:14:19.292 "data_offset": 0, 00:14:19.292 "data_size": 65536 00:14:19.292 }, 00:14:19.292 { 00:14:19.292 "name": "BaseBdev2", 00:14:19.292 "uuid": "dcde736e-b88f-57a5-a045-fe8fa09a9736", 00:14:19.292 "is_configured": true, 00:14:19.292 "data_offset": 0, 00:14:19.292 "data_size": 65536 00:14:19.292 }, 00:14:19.292 { 00:14:19.292 "name": "BaseBdev3", 00:14:19.292 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:19.292 "is_configured": true, 00:14:19.292 "data_offset": 0, 00:14:19.292 "data_size": 65536 00:14:19.292 }, 00:14:19.292 { 00:14:19.292 "name": "BaseBdev4", 00:14:19.292 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:19.292 "is_configured": true, 00:14:19.292 "data_offset": 0, 00:14:19.292 "data_size": 65536 00:14:19.292 } 00:14:19.292 ] 00:14:19.292 }' 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.292 [2024-12-06 18:11:31.374657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.292 [2024-12-06 18:11:31.409808] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.292 [2024-12-06 18:11:31.409914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.292 [2024-12-06 18:11:31.409939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.292 [2024-12-06 18:11:31.409955] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.292 18:11:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.292 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.293 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.293 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.293 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.293 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.293 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.293 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.293 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.614 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.614 "name": "raid_bdev1", 00:14:19.614 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:19.614 "strip_size_kb": 0, 00:14:19.614 "state": "online", 00:14:19.614 "raid_level": "raid1", 00:14:19.614 "superblock": false, 00:14:19.614 "num_base_bdevs": 4, 00:14:19.614 "num_base_bdevs_discovered": 3, 00:14:19.614 "num_base_bdevs_operational": 3, 00:14:19.614 "base_bdevs_list": [ 00:14:19.614 { 00:14:19.614 "name": null, 00:14:19.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.614 "is_configured": false, 00:14:19.614 "data_offset": 0, 00:14:19.614 "data_size": 65536 00:14:19.614 }, 00:14:19.614 { 00:14:19.614 "name": "BaseBdev2", 00:14:19.614 "uuid": "dcde736e-b88f-57a5-a045-fe8fa09a9736", 00:14:19.614 "is_configured": true, 00:14:19.614 "data_offset": 0, 00:14:19.614 "data_size": 65536 00:14:19.614 }, 00:14:19.614 { 00:14:19.614 "name": 
"BaseBdev3", 00:14:19.614 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:19.614 "is_configured": true, 00:14:19.614 "data_offset": 0, 00:14:19.614 "data_size": 65536 00:14:19.614 }, 00:14:19.614 { 00:14:19.614 "name": "BaseBdev4", 00:14:19.614 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:19.614 "is_configured": true, 00:14:19.614 "data_offset": 0, 00:14:19.614 "data_size": 65536 00:14:19.614 } 00:14:19.614 ] 00:14:19.614 }' 00:14:19.614 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.614 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.873 18:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.873 18:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.873 "name": "raid_bdev1", 00:14:19.873 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:19.873 "strip_size_kb": 0, 00:14:19.873 "state": "online", 00:14:19.873 "raid_level": 
"raid1", 00:14:19.873 "superblock": false, 00:14:19.873 "num_base_bdevs": 4, 00:14:19.873 "num_base_bdevs_discovered": 3, 00:14:19.873 "num_base_bdevs_operational": 3, 00:14:19.873 "base_bdevs_list": [ 00:14:19.873 { 00:14:19.873 "name": null, 00:14:19.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.873 "is_configured": false, 00:14:19.873 "data_offset": 0, 00:14:19.873 "data_size": 65536 00:14:19.873 }, 00:14:19.873 { 00:14:19.873 "name": "BaseBdev2", 00:14:19.873 "uuid": "dcde736e-b88f-57a5-a045-fe8fa09a9736", 00:14:19.873 "is_configured": true, 00:14:19.873 "data_offset": 0, 00:14:19.873 "data_size": 65536 00:14:19.873 }, 00:14:19.873 { 00:14:19.873 "name": "BaseBdev3", 00:14:19.873 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:19.873 "is_configured": true, 00:14:19.873 "data_offset": 0, 00:14:19.873 "data_size": 65536 00:14:19.873 }, 00:14:19.873 { 00:14:19.873 "name": "BaseBdev4", 00:14:19.873 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:19.873 "is_configured": true, 00:14:19.873 "data_offset": 0, 00:14:19.873 "data_size": 65536 00:14:19.873 } 00:14:19.873 ] 00:14:19.873 }' 00:14:19.873 18:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.132 18:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.132 18:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.132 18:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.132 18:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:20.132 18:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.132 18:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.132 [2024-12-06 18:11:32.107394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:14:20.132 [2024-12-06 18:11:32.125513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:20.132 18:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.132 18:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:20.132 [2024-12-06 18:11:32.127825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.065 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.065 "name": "raid_bdev1", 00:14:21.065 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:21.065 "strip_size_kb": 0, 00:14:21.065 "state": "online", 00:14:21.065 "raid_level": "raid1", 00:14:21.065 "superblock": false, 00:14:21.065 "num_base_bdevs": 4, 00:14:21.065 "num_base_bdevs_discovered": 4, 00:14:21.065 "num_base_bdevs_operational": 4, 
00:14:21.065 "process": { 00:14:21.065 "type": "rebuild", 00:14:21.065 "target": "spare", 00:14:21.065 "progress": { 00:14:21.065 "blocks": 20480, 00:14:21.065 "percent": 31 00:14:21.065 } 00:14:21.065 }, 00:14:21.065 "base_bdevs_list": [ 00:14:21.065 { 00:14:21.065 "name": "spare", 00:14:21.065 "uuid": "891a1ae6-2038-5cce-8b5c-0a173ad283cc", 00:14:21.065 "is_configured": true, 00:14:21.065 "data_offset": 0, 00:14:21.065 "data_size": 65536 00:14:21.065 }, 00:14:21.065 { 00:14:21.065 "name": "BaseBdev2", 00:14:21.065 "uuid": "dcde736e-b88f-57a5-a045-fe8fa09a9736", 00:14:21.065 "is_configured": true, 00:14:21.065 "data_offset": 0, 00:14:21.065 "data_size": 65536 00:14:21.065 }, 00:14:21.065 { 00:14:21.065 "name": "BaseBdev3", 00:14:21.065 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:21.065 "is_configured": true, 00:14:21.065 "data_offset": 0, 00:14:21.065 "data_size": 65536 00:14:21.065 }, 00:14:21.065 { 00:14:21.065 "name": "BaseBdev4", 00:14:21.065 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:21.065 "is_configured": true, 00:14:21.065 "data_offset": 0, 00:14:21.066 "data_size": 65536 00:14:21.066 } 00:14:21.066 ] 00:14:21.066 }' 00:14:21.066 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.325 [2024-12-06 18:11:33.299333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:21.325 [2024-12-06 18:11:33.334271] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.325 "name": "raid_bdev1", 00:14:21.325 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:21.325 "strip_size_kb": 0, 00:14:21.325 "state": "online", 00:14:21.325 "raid_level": "raid1", 00:14:21.325 "superblock": false, 00:14:21.325 "num_base_bdevs": 4, 00:14:21.325 "num_base_bdevs_discovered": 3, 00:14:21.325 "num_base_bdevs_operational": 3, 00:14:21.325 "process": { 00:14:21.325 "type": "rebuild", 00:14:21.325 "target": "spare", 00:14:21.325 "progress": { 00:14:21.325 "blocks": 24576, 00:14:21.325 "percent": 37 00:14:21.325 } 00:14:21.325 }, 00:14:21.325 "base_bdevs_list": [ 00:14:21.325 { 00:14:21.325 "name": "spare", 00:14:21.325 "uuid": "891a1ae6-2038-5cce-8b5c-0a173ad283cc", 00:14:21.325 "is_configured": true, 00:14:21.325 "data_offset": 0, 00:14:21.325 "data_size": 65536 00:14:21.325 }, 00:14:21.325 { 00:14:21.325 "name": null, 00:14:21.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.325 "is_configured": false, 00:14:21.325 "data_offset": 0, 00:14:21.325 "data_size": 65536 00:14:21.325 }, 00:14:21.325 { 00:14:21.325 "name": "BaseBdev3", 00:14:21.325 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:21.325 "is_configured": true, 00:14:21.325 "data_offset": 0, 00:14:21.325 "data_size": 65536 00:14:21.325 }, 00:14:21.325 { 00:14:21.325 "name": "BaseBdev4", 00:14:21.325 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:21.325 "is_configured": true, 00:14:21.325 "data_offset": 0, 00:14:21.325 "data_size": 65536 00:14:21.325 } 00:14:21.325 ] 00:14:21.325 }' 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.325 18:11:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=467 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.325 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.585 18:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.585 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.585 "name": "raid_bdev1", 00:14:21.585 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:21.585 "strip_size_kb": 0, 00:14:21.585 "state": "online", 00:14:21.585 "raid_level": "raid1", 00:14:21.585 "superblock": false, 00:14:21.585 "num_base_bdevs": 4, 00:14:21.585 "num_base_bdevs_discovered": 3, 00:14:21.585 "num_base_bdevs_operational": 3, 00:14:21.585 "process": { 00:14:21.585 "type": "rebuild", 00:14:21.585 "target": "spare", 00:14:21.585 "progress": { 00:14:21.585 "blocks": 26624, 00:14:21.585 "percent": 40 
00:14:21.585 } 00:14:21.585 }, 00:14:21.585 "base_bdevs_list": [ 00:14:21.585 { 00:14:21.585 "name": "spare", 00:14:21.585 "uuid": "891a1ae6-2038-5cce-8b5c-0a173ad283cc", 00:14:21.585 "is_configured": true, 00:14:21.585 "data_offset": 0, 00:14:21.585 "data_size": 65536 00:14:21.585 }, 00:14:21.585 { 00:14:21.585 "name": null, 00:14:21.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.585 "is_configured": false, 00:14:21.585 "data_offset": 0, 00:14:21.585 "data_size": 65536 00:14:21.585 }, 00:14:21.585 { 00:14:21.585 "name": "BaseBdev3", 00:14:21.585 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:21.585 "is_configured": true, 00:14:21.585 "data_offset": 0, 00:14:21.585 "data_size": 65536 00:14:21.585 }, 00:14:21.585 { 00:14:21.585 "name": "BaseBdev4", 00:14:21.585 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:21.585 "is_configured": true, 00:14:21.585 "data_offset": 0, 00:14:21.585 "data_size": 65536 00:14:21.585 } 00:14:21.585 ] 00:14:21.585 }' 00:14:21.585 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.585 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.585 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.585 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.585 18:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.526 18:11:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.526 "name": "raid_bdev1", 00:14:22.526 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:22.526 "strip_size_kb": 0, 00:14:22.526 "state": "online", 00:14:22.526 "raid_level": "raid1", 00:14:22.526 "superblock": false, 00:14:22.526 "num_base_bdevs": 4, 00:14:22.526 "num_base_bdevs_discovered": 3, 00:14:22.526 "num_base_bdevs_operational": 3, 00:14:22.526 "process": { 00:14:22.526 "type": "rebuild", 00:14:22.526 "target": "spare", 00:14:22.526 "progress": { 00:14:22.526 "blocks": 49152, 00:14:22.526 "percent": 75 00:14:22.526 } 00:14:22.526 }, 00:14:22.526 "base_bdevs_list": [ 00:14:22.526 { 00:14:22.526 "name": "spare", 00:14:22.526 "uuid": "891a1ae6-2038-5cce-8b5c-0a173ad283cc", 00:14:22.526 "is_configured": true, 00:14:22.526 "data_offset": 0, 00:14:22.526 "data_size": 65536 00:14:22.526 }, 00:14:22.526 { 00:14:22.526 "name": null, 00:14:22.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.526 "is_configured": false, 00:14:22.526 "data_offset": 0, 00:14:22.526 "data_size": 65536 00:14:22.526 }, 00:14:22.526 { 00:14:22.526 "name": "BaseBdev3", 00:14:22.526 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:22.526 "is_configured": true, 
00:14:22.526 "data_offset": 0, 00:14:22.526 "data_size": 65536 00:14:22.526 }, 00:14:22.526 { 00:14:22.526 "name": "BaseBdev4", 00:14:22.526 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:22.526 "is_configured": true, 00:14:22.526 "data_offset": 0, 00:14:22.526 "data_size": 65536 00:14:22.526 } 00:14:22.526 ] 00:14:22.526 }' 00:14:22.526 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.786 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.786 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.786 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.786 18:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.357 [2024-12-06 18:11:35.345348] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:23.357 [2024-12-06 18:11:35.345471] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:23.357 [2024-12-06 18:11:35.345553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.616 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.874 "name": "raid_bdev1", 00:14:23.874 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:23.874 "strip_size_kb": 0, 00:14:23.874 "state": "online", 00:14:23.874 "raid_level": "raid1", 00:14:23.874 "superblock": false, 00:14:23.874 "num_base_bdevs": 4, 00:14:23.874 "num_base_bdevs_discovered": 3, 00:14:23.874 "num_base_bdevs_operational": 3, 00:14:23.874 "base_bdevs_list": [ 00:14:23.874 { 00:14:23.874 "name": "spare", 00:14:23.874 "uuid": "891a1ae6-2038-5cce-8b5c-0a173ad283cc", 00:14:23.874 "is_configured": true, 00:14:23.874 "data_offset": 0, 00:14:23.874 "data_size": 65536 00:14:23.874 }, 00:14:23.874 { 00:14:23.874 "name": null, 00:14:23.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.874 "is_configured": false, 00:14:23.874 "data_offset": 0, 00:14:23.874 "data_size": 65536 00:14:23.874 }, 00:14:23.874 { 00:14:23.874 "name": "BaseBdev3", 00:14:23.874 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:23.874 "is_configured": true, 00:14:23.874 "data_offset": 0, 00:14:23.874 "data_size": 65536 00:14:23.874 }, 00:14:23.874 { 00:14:23.874 "name": "BaseBdev4", 00:14:23.874 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:23.874 "is_configured": true, 00:14:23.874 "data_offset": 0, 00:14:23.874 "data_size": 65536 00:14:23.874 } 00:14:23.874 ] 00:14:23.874 }' 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.874 18:11:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.874 18:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.875 18:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.875 18:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.875 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.875 "name": "raid_bdev1", 00:14:23.875 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:23.875 "strip_size_kb": 0, 00:14:23.875 "state": "online", 00:14:23.875 "raid_level": "raid1", 00:14:23.875 "superblock": false, 00:14:23.875 "num_base_bdevs": 4, 00:14:23.875 "num_base_bdevs_discovered": 3, 00:14:23.875 "num_base_bdevs_operational": 3, 00:14:23.875 "base_bdevs_list": [ 00:14:23.875 { 00:14:23.875 "name": "spare", 
00:14:23.875 "uuid": "891a1ae6-2038-5cce-8b5c-0a173ad283cc", 00:14:23.875 "is_configured": true, 00:14:23.875 "data_offset": 0, 00:14:23.875 "data_size": 65536 00:14:23.875 }, 00:14:23.875 { 00:14:23.875 "name": null, 00:14:23.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.875 "is_configured": false, 00:14:23.875 "data_offset": 0, 00:14:23.875 "data_size": 65536 00:14:23.875 }, 00:14:23.875 { 00:14:23.875 "name": "BaseBdev3", 00:14:23.875 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:23.875 "is_configured": true, 00:14:23.875 "data_offset": 0, 00:14:23.875 "data_size": 65536 00:14:23.875 }, 00:14:23.875 { 00:14:23.875 "name": "BaseBdev4", 00:14:23.875 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:23.875 "is_configured": true, 00:14:23.875 "data_offset": 0, 00:14:23.875 "data_size": 65536 00:14:23.875 } 00:14:23.875 ] 00:14:23.875 }' 00:14:23.875 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.875 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.875 18:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.133 18:11:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.133 "name": "raid_bdev1", 00:14:24.133 "uuid": "1ad5f537-98c7-4029-bf2c-9578d18badc4", 00:14:24.133 "strip_size_kb": 0, 00:14:24.133 "state": "online", 00:14:24.133 "raid_level": "raid1", 00:14:24.133 "superblock": false, 00:14:24.133 "num_base_bdevs": 4, 00:14:24.133 "num_base_bdevs_discovered": 3, 00:14:24.133 "num_base_bdevs_operational": 3, 00:14:24.133 "base_bdevs_list": [ 00:14:24.133 { 00:14:24.133 "name": "spare", 00:14:24.133 "uuid": "891a1ae6-2038-5cce-8b5c-0a173ad283cc", 00:14:24.133 "is_configured": true, 00:14:24.133 "data_offset": 0, 00:14:24.133 "data_size": 65536 00:14:24.133 }, 00:14:24.133 { 00:14:24.133 "name": null, 00:14:24.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.133 "is_configured": false, 00:14:24.133 "data_offset": 0, 00:14:24.133 "data_size": 65536 00:14:24.133 }, 00:14:24.133 { 00:14:24.133 "name": "BaseBdev3", 00:14:24.133 "uuid": "ec338771-f3a5-5f08-a49c-6005865e4de4", 00:14:24.133 "is_configured": true, 
00:14:24.133 "data_offset": 0, 00:14:24.133 "data_size": 65536 00:14:24.133 }, 00:14:24.133 { 00:14:24.133 "name": "BaseBdev4", 00:14:24.133 "uuid": "f352bd77-96ce-592f-a29c-6454033173a1", 00:14:24.133 "is_configured": true, 00:14:24.133 "data_offset": 0, 00:14:24.133 "data_size": 65536 00:14:24.133 } 00:14:24.133 ] 00:14:24.133 }' 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.133 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.392 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.392 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.392 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.392 [2024-12-06 18:11:36.521328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.392 [2024-12-06 18:11:36.521372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.392 [2024-12-06 18:11:36.521480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.392 [2024-12-06 18:11:36.521584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.392 [2024-12-06 18:11:36.521604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:24.392 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.392 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:24.392 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.392 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.392 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:14:24.392 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.652 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:24.912 /dev/nbd0 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:24.912 18:11:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.912 1+0 records in 00:14:24.912 1+0 records out 00:14:24.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036301 s, 11.3 MB/s 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.912 18:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:25.171 /dev/nbd1 00:14:25.171 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.171 
18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.171 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:25.171 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:25.171 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.171 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.171 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:25.171 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:25.171 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.172 1+0 records in 00:14:25.172 1+0 records out 00:14:25.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484837 s, 8.4 MB/s 00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:14:25.172 18:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:25.431 18:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:25.431 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.431 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.431 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.431 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:25.431 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.431 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.691 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:25.952 
18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78048 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78048 ']' 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78048 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78048 00:14:25.952 killing process with pid 78048 00:14:25.952 Received shutdown signal, test time was about 60.000000 seconds 00:14:25.952 00:14:25.952 Latency(us) 00:14:25.952 [2024-12-06T18:11:38.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.952 [2024-12-06T18:11:38.120Z] =================================================================================================================== 00:14:25.952 [2024-12-06T18:11:38.120Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78048' 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78048 00:14:25.952 [2024-12-06 18:11:37.951938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.952 18:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78048 00:14:26.520 [2024-12-06 18:11:38.547164] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:27.900 00:14:27.900 real 0m19.781s 00:14:27.900 user 0m22.235s 00:14:27.900 sys 0m3.593s 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.900 ************************************ 00:14:27.900 END TEST raid_rebuild_test 00:14:27.900 ************************************ 00:14:27.900 18:11:39 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:27.900 18:11:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:27.900 18:11:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.900 18:11:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.900 ************************************ 00:14:27.900 START TEST raid_rebuild_test_sb 00:14:27.900 ************************************ 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:27.900 18:11:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.900 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78523 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78523 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78523 ']' 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.901 18:11:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.901 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:27.901 Zero copy mechanism will not be used. 00:14:27.901 [2024-12-06 18:11:39.936838] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:14:27.901 [2024-12-06 18:11:39.936953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78523 ] 00:14:28.160 [2024-12-06 18:11:40.112613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.160 [2024-12-06 18:11:40.233024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.420 [2024-12-06 18:11:40.451924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.420 [2024-12-06 18:11:40.451975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.680 18:11:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.680 BaseBdev1_malloc 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.680 [2024-12-06 18:11:40.835035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:28.680 [2024-12-06 18:11:40.835114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.680 [2024-12-06 18:11:40.835141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:28.680 [2024-12-06 18:11:40.835153] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.680 [2024-12-06 18:11:40.837562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.680 [2024-12-06 18:11:40.837603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:28.680 BaseBdev1 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.680 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.940 BaseBdev2_malloc 00:14:28.940 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:28.940 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:28.940 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 [2024-12-06 18:11:40.897962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:28.941 [2024-12-06 18:11:40.898037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.941 [2024-12-06 18:11:40.898077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:28.941 [2024-12-06 18:11:40.898091] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.941 [2024-12-06 18:11:40.900621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.941 [2024-12-06 18:11:40.900666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:28.941 BaseBdev2 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 BaseBdev3_malloc 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:28.941 18:11:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 [2024-12-06 18:11:40.962755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:28.941 [2024-12-06 18:11:40.962811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.941 [2024-12-06 18:11:40.962836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:28.941 [2024-12-06 18:11:40.962849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.941 [2024-12-06 18:11:40.965174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.941 [2024-12-06 18:11:40.965211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:28.941 BaseBdev3 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.941 18:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 BaseBdev4_malloc 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 
[2024-12-06 18:11:41.019998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:28.941 [2024-12-06 18:11:41.020076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.941 [2024-12-06 18:11:41.020101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:28.941 [2024-12-06 18:11:41.020114] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.941 [2024-12-06 18:11:41.022401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.941 [2024-12-06 18:11:41.022440] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:28.941 BaseBdev4 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 spare_malloc 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 spare_delay 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:28.941 18:11:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 [2024-12-06 18:11:41.090538] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:28.941 [2024-12-06 18:11:41.090598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.941 [2024-12-06 18:11:41.090631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:28.941 [2024-12-06 18:11:41.090643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.941 [2024-12-06 18:11:41.093094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.941 [2024-12-06 18:11:41.093131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:28.941 spare 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.941 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 [2024-12-06 18:11:41.102595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.941 [2024-12-06 18:11:41.104712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.941 [2024-12-06 18:11:41.104791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.941 [2024-12-06 18:11:41.104855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:28.941 [2024-12-06 18:11:41.105092] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:28.941 [2024-12-06 18:11:41.105116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:28.941 [2024-12-06 18:11:41.105421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:28.941 [2024-12-06 18:11:41.105632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:28.941 [2024-12-06 18:11:41.105653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:28.941 [2024-12-06 18:11:41.105838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.201 "name": "raid_bdev1", 00:14:29.201 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:29.201 "strip_size_kb": 0, 00:14:29.201 "state": "online", 00:14:29.201 "raid_level": "raid1", 00:14:29.201 "superblock": true, 00:14:29.201 "num_base_bdevs": 4, 00:14:29.201 "num_base_bdevs_discovered": 4, 00:14:29.201 "num_base_bdevs_operational": 4, 00:14:29.201 "base_bdevs_list": [ 00:14:29.201 { 00:14:29.201 "name": "BaseBdev1", 00:14:29.201 "uuid": "74cdf4b5-6eb0-5115-ac82-cf54b88ebb12", 00:14:29.201 "is_configured": true, 00:14:29.201 "data_offset": 2048, 00:14:29.201 "data_size": 63488 00:14:29.201 }, 00:14:29.201 { 00:14:29.201 "name": "BaseBdev2", 00:14:29.201 "uuid": "670ac55b-92ce-5a6e-b91c-40825722a01c", 00:14:29.201 "is_configured": true, 00:14:29.201 "data_offset": 2048, 00:14:29.201 "data_size": 63488 00:14:29.201 }, 00:14:29.201 { 00:14:29.201 "name": "BaseBdev3", 00:14:29.201 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:29.201 "is_configured": true, 00:14:29.201 "data_offset": 2048, 00:14:29.201 "data_size": 63488 00:14:29.201 }, 00:14:29.201 { 00:14:29.201 "name": "BaseBdev4", 00:14:29.201 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:29.201 "is_configured": true, 00:14:29.201 "data_offset": 2048, 00:14:29.201 "data_size": 63488 00:14:29.201 } 00:14:29.201 ] 00:14:29.201 }' 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.201 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.461 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.461 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:29.461 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.461 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.461 [2024-12-06 18:11:41.602196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.461 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.721 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:29.721 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:29.721 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.721 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.722 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:29.981 [2024-12-06 18:11:41.917309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:29.981 /dev/nbd0 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:29.981 
18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.981 1+0 records in 00:14:29.981 1+0 records out 00:14:29.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271646 s, 15.1 MB/s 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:29.981 18:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:36.551 63488+0 records in 00:14:36.552 63488+0 records out 00:14:36.552 32505856 bytes (33 MB, 31 MiB) copied, 6.06647 s, 5.4 MB/s 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:36.552 [2024-12-06 18:11:48.267827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.552 [2024-12-06 18:11:48.316583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.552 
18:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.552 "name": "raid_bdev1", 00:14:36.552 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:36.552 "strip_size_kb": 0, 00:14:36.552 "state": 
"online", 00:14:36.552 "raid_level": "raid1", 00:14:36.552 "superblock": true, 00:14:36.552 "num_base_bdevs": 4, 00:14:36.552 "num_base_bdevs_discovered": 3, 00:14:36.552 "num_base_bdevs_operational": 3, 00:14:36.552 "base_bdevs_list": [ 00:14:36.552 { 00:14:36.552 "name": null, 00:14:36.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.552 "is_configured": false, 00:14:36.552 "data_offset": 0, 00:14:36.552 "data_size": 63488 00:14:36.552 }, 00:14:36.552 { 00:14:36.552 "name": "BaseBdev2", 00:14:36.552 "uuid": "670ac55b-92ce-5a6e-b91c-40825722a01c", 00:14:36.552 "is_configured": true, 00:14:36.552 "data_offset": 2048, 00:14:36.552 "data_size": 63488 00:14:36.552 }, 00:14:36.552 { 00:14:36.552 "name": "BaseBdev3", 00:14:36.552 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:36.552 "is_configured": true, 00:14:36.552 "data_offset": 2048, 00:14:36.552 "data_size": 63488 00:14:36.552 }, 00:14:36.552 { 00:14:36.552 "name": "BaseBdev4", 00:14:36.552 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:36.552 "is_configured": true, 00:14:36.552 "data_offset": 2048, 00:14:36.552 "data_size": 63488 00:14:36.552 } 00:14:36.552 ] 00:14:36.552 }' 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.552 18:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.812 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.812 18:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.812 18:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.812 [2024-12-06 18:11:48.795791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.812 [2024-12-06 18:11:48.812740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:36.812 18:11:48 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.812 18:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:36.812 [2024-12-06 18:11:48.814852] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.746 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.746 "name": "raid_bdev1", 00:14:37.746 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:37.746 "strip_size_kb": 0, 00:14:37.746 "state": "online", 00:14:37.746 "raid_level": "raid1", 00:14:37.746 "superblock": true, 00:14:37.746 "num_base_bdevs": 4, 00:14:37.746 "num_base_bdevs_discovered": 4, 00:14:37.746 "num_base_bdevs_operational": 4, 00:14:37.746 "process": { 00:14:37.746 "type": "rebuild", 00:14:37.746 "target": "spare", 00:14:37.746 "progress": { 00:14:37.746 "blocks": 20480, 
00:14:37.746 "percent": 32 00:14:37.746 } 00:14:37.746 }, 00:14:37.747 "base_bdevs_list": [ 00:14:37.747 { 00:14:37.747 "name": "spare", 00:14:37.747 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:37.747 "is_configured": true, 00:14:37.747 "data_offset": 2048, 00:14:37.747 "data_size": 63488 00:14:37.747 }, 00:14:37.747 { 00:14:37.747 "name": "BaseBdev2", 00:14:37.747 "uuid": "670ac55b-92ce-5a6e-b91c-40825722a01c", 00:14:37.747 "is_configured": true, 00:14:37.747 "data_offset": 2048, 00:14:37.747 "data_size": 63488 00:14:37.747 }, 00:14:37.747 { 00:14:37.747 "name": "BaseBdev3", 00:14:37.747 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:37.747 "is_configured": true, 00:14:37.747 "data_offset": 2048, 00:14:37.747 "data_size": 63488 00:14:37.747 }, 00:14:37.747 { 00:14:37.747 "name": "BaseBdev4", 00:14:37.747 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:37.747 "is_configured": true, 00:14:37.747 "data_offset": 2048, 00:14:37.747 "data_size": 63488 00:14:37.747 } 00:14:37.747 ] 00:14:37.747 }' 00:14:37.747 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.007 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.007 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.007 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.007 18:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.007 18:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.007 18:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.007 [2024-12-06 18:11:49.970558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.007 [2024-12-06 18:11:50.021097] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:38.007 [2024-12-06 18:11:50.021187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.007 [2024-12-06 18:11:50.021209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.007 [2024-12-06 18:11:50.021221] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.007 "name": "raid_bdev1", 00:14:38.007 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:38.007 "strip_size_kb": 0, 00:14:38.007 "state": "online", 00:14:38.007 "raid_level": "raid1", 00:14:38.007 "superblock": true, 00:14:38.007 "num_base_bdevs": 4, 00:14:38.007 "num_base_bdevs_discovered": 3, 00:14:38.007 "num_base_bdevs_operational": 3, 00:14:38.007 "base_bdevs_list": [ 00:14:38.007 { 00:14:38.007 "name": null, 00:14:38.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.007 "is_configured": false, 00:14:38.007 "data_offset": 0, 00:14:38.007 "data_size": 63488 00:14:38.007 }, 00:14:38.007 { 00:14:38.007 "name": "BaseBdev2", 00:14:38.007 "uuid": "670ac55b-92ce-5a6e-b91c-40825722a01c", 00:14:38.007 "is_configured": true, 00:14:38.007 "data_offset": 2048, 00:14:38.007 "data_size": 63488 00:14:38.007 }, 00:14:38.007 { 00:14:38.007 "name": "BaseBdev3", 00:14:38.007 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:38.007 "is_configured": true, 00:14:38.007 "data_offset": 2048, 00:14:38.007 "data_size": 63488 00:14:38.007 }, 00:14:38.007 { 00:14:38.007 "name": "BaseBdev4", 00:14:38.007 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:38.007 "is_configured": true, 00:14:38.007 "data_offset": 2048, 00:14:38.007 "data_size": 63488 00:14:38.007 } 00:14:38.007 ] 00:14:38.007 }' 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.007 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.575 "name": "raid_bdev1", 00:14:38.575 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:38.575 "strip_size_kb": 0, 00:14:38.575 "state": "online", 00:14:38.575 "raid_level": "raid1", 00:14:38.575 "superblock": true, 00:14:38.575 "num_base_bdevs": 4, 00:14:38.575 "num_base_bdevs_discovered": 3, 00:14:38.575 "num_base_bdevs_operational": 3, 00:14:38.575 "base_bdevs_list": [ 00:14:38.575 { 00:14:38.575 "name": null, 00:14:38.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.575 "is_configured": false, 00:14:38.575 "data_offset": 0, 00:14:38.575 "data_size": 63488 00:14:38.575 }, 00:14:38.575 { 00:14:38.575 "name": "BaseBdev2", 00:14:38.575 "uuid": "670ac55b-92ce-5a6e-b91c-40825722a01c", 00:14:38.575 "is_configured": true, 00:14:38.575 "data_offset": 2048, 00:14:38.575 "data_size": 63488 00:14:38.575 }, 00:14:38.575 { 00:14:38.575 "name": "BaseBdev3", 00:14:38.575 "uuid": 
"2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:38.575 "is_configured": true, 00:14:38.575 "data_offset": 2048, 00:14:38.575 "data_size": 63488 00:14:38.575 }, 00:14:38.575 { 00:14:38.575 "name": "BaseBdev4", 00:14:38.575 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:38.575 "is_configured": true, 00:14:38.575 "data_offset": 2048, 00:14:38.575 "data_size": 63488 00:14:38.575 } 00:14:38.575 ] 00:14:38.575 }' 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.575 [2024-12-06 18:11:50.683404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.575 [2024-12-06 18:11:50.701904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.575 18:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:38.575 [2024-12-06 18:11:50.704238] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.971 "name": "raid_bdev1", 00:14:39.971 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:39.971 "strip_size_kb": 0, 00:14:39.971 "state": "online", 00:14:39.971 "raid_level": "raid1", 00:14:39.971 "superblock": true, 00:14:39.971 "num_base_bdevs": 4, 00:14:39.971 "num_base_bdevs_discovered": 4, 00:14:39.971 "num_base_bdevs_operational": 4, 00:14:39.971 "process": { 00:14:39.971 "type": "rebuild", 00:14:39.971 "target": "spare", 00:14:39.971 "progress": { 00:14:39.971 "blocks": 20480, 00:14:39.971 "percent": 32 00:14:39.971 } 00:14:39.971 }, 00:14:39.971 "base_bdevs_list": [ 00:14:39.971 { 00:14:39.971 "name": "spare", 00:14:39.971 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:39.971 "is_configured": true, 00:14:39.971 "data_offset": 2048, 00:14:39.971 "data_size": 63488 00:14:39.971 }, 00:14:39.971 { 00:14:39.971 "name": "BaseBdev2", 00:14:39.971 "uuid": "670ac55b-92ce-5a6e-b91c-40825722a01c", 00:14:39.971 "is_configured": true, 00:14:39.971 "data_offset": 2048, 
00:14:39.971 "data_size": 63488 00:14:39.971 }, 00:14:39.971 { 00:14:39.971 "name": "BaseBdev3", 00:14:39.971 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:39.971 "is_configured": true, 00:14:39.971 "data_offset": 2048, 00:14:39.971 "data_size": 63488 00:14:39.971 }, 00:14:39.971 { 00:14:39.971 "name": "BaseBdev4", 00:14:39.971 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:39.971 "is_configured": true, 00:14:39.971 "data_offset": 2048, 00:14:39.971 "data_size": 63488 00:14:39.971 } 00:14:39.971 ] 00:14:39.971 }' 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:39.971 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.971 18:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 [2024-12-06 18:11:51.867512] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:39.971 [2024-12-06 18:11:52.010417] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.971 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.971 "name": "raid_bdev1", 00:14:39.971 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:39.971 "strip_size_kb": 0, 00:14:39.971 "state": "online", 00:14:39.971 "raid_level": "raid1", 00:14:39.972 "superblock": true, 00:14:39.972 "num_base_bdevs": 4, 
00:14:39.972 "num_base_bdevs_discovered": 3, 00:14:39.972 "num_base_bdevs_operational": 3, 00:14:39.972 "process": { 00:14:39.972 "type": "rebuild", 00:14:39.972 "target": "spare", 00:14:39.972 "progress": { 00:14:39.972 "blocks": 24576, 00:14:39.972 "percent": 38 00:14:39.972 } 00:14:39.972 }, 00:14:39.972 "base_bdevs_list": [ 00:14:39.972 { 00:14:39.972 "name": "spare", 00:14:39.972 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:39.972 "is_configured": true, 00:14:39.972 "data_offset": 2048, 00:14:39.972 "data_size": 63488 00:14:39.972 }, 00:14:39.972 { 00:14:39.972 "name": null, 00:14:39.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.972 "is_configured": false, 00:14:39.972 "data_offset": 0, 00:14:39.972 "data_size": 63488 00:14:39.972 }, 00:14:39.972 { 00:14:39.972 "name": "BaseBdev3", 00:14:39.972 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:39.972 "is_configured": true, 00:14:39.972 "data_offset": 2048, 00:14:39.972 "data_size": 63488 00:14:39.972 }, 00:14:39.972 { 00:14:39.972 "name": "BaseBdev4", 00:14:39.972 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:39.972 "is_configured": true, 00:14:39.972 "data_offset": 2048, 00:14:39.972 "data_size": 63488 00:14:39.972 } 00:14:39.972 ] 00:14:39.972 }' 00:14:39.972 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.972 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.972 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.231 "name": "raid_bdev1", 00:14:40.231 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:40.231 "strip_size_kb": 0, 00:14:40.231 "state": "online", 00:14:40.231 "raid_level": "raid1", 00:14:40.231 "superblock": true, 00:14:40.231 "num_base_bdevs": 4, 00:14:40.231 "num_base_bdevs_discovered": 3, 00:14:40.231 "num_base_bdevs_operational": 3, 00:14:40.231 "process": { 00:14:40.231 "type": "rebuild", 00:14:40.231 "target": "spare", 00:14:40.231 "progress": { 00:14:40.231 "blocks": 26624, 00:14:40.231 "percent": 41 00:14:40.231 } 00:14:40.231 }, 00:14:40.231 "base_bdevs_list": [ 00:14:40.231 { 00:14:40.231 "name": "spare", 00:14:40.231 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:40.231 "is_configured": true, 00:14:40.231 "data_offset": 2048, 00:14:40.231 "data_size": 63488 00:14:40.231 }, 00:14:40.231 { 
00:14:40.231 "name": null, 00:14:40.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.231 "is_configured": false, 00:14:40.231 "data_offset": 0, 00:14:40.231 "data_size": 63488 00:14:40.231 }, 00:14:40.231 { 00:14:40.231 "name": "BaseBdev3", 00:14:40.231 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:40.231 "is_configured": true, 00:14:40.231 "data_offset": 2048, 00:14:40.231 "data_size": 63488 00:14:40.231 }, 00:14:40.231 { 00:14:40.231 "name": "BaseBdev4", 00:14:40.231 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:40.231 "is_configured": true, 00:14:40.231 "data_offset": 2048, 00:14:40.231 "data_size": 63488 00:14:40.231 } 00:14:40.231 ] 00:14:40.231 }' 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.231 18:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.166 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.166 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.166 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.166 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.166 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.166 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.425 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:41.425 18:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.425 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.425 18:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.425 18:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.425 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.425 "name": "raid_bdev1", 00:14:41.425 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:41.425 "strip_size_kb": 0, 00:14:41.425 "state": "online", 00:14:41.425 "raid_level": "raid1", 00:14:41.425 "superblock": true, 00:14:41.425 "num_base_bdevs": 4, 00:14:41.425 "num_base_bdevs_discovered": 3, 00:14:41.425 "num_base_bdevs_operational": 3, 00:14:41.425 "process": { 00:14:41.425 "type": "rebuild", 00:14:41.425 "target": "spare", 00:14:41.425 "progress": { 00:14:41.425 "blocks": 51200, 00:14:41.425 "percent": 80 00:14:41.425 } 00:14:41.425 }, 00:14:41.425 "base_bdevs_list": [ 00:14:41.425 { 00:14:41.425 "name": "spare", 00:14:41.425 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:41.425 "is_configured": true, 00:14:41.425 "data_offset": 2048, 00:14:41.425 "data_size": 63488 00:14:41.425 }, 00:14:41.425 { 00:14:41.425 "name": null, 00:14:41.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.425 "is_configured": false, 00:14:41.425 "data_offset": 0, 00:14:41.425 "data_size": 63488 00:14:41.425 }, 00:14:41.425 { 00:14:41.425 "name": "BaseBdev3", 00:14:41.425 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:41.425 "is_configured": true, 00:14:41.425 "data_offset": 2048, 00:14:41.425 "data_size": 63488 00:14:41.425 }, 00:14:41.425 { 00:14:41.425 "name": "BaseBdev4", 00:14:41.425 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:41.425 "is_configured": true, 00:14:41.425 "data_offset": 
2048, 00:14:41.426 "data_size": 63488 00:14:41.426 } 00:14:41.426 ] 00:14:41.426 }' 00:14:41.426 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.426 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.426 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.426 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.426 18:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.995 [2024-12-06 18:11:53.920370] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:41.995 [2024-12-06 18:11:53.920491] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:41.995 [2024-12-06 18:11:53.920692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.566 "name": "raid_bdev1", 00:14:42.566 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:42.566 "strip_size_kb": 0, 00:14:42.566 "state": "online", 00:14:42.566 "raid_level": "raid1", 00:14:42.566 "superblock": true, 00:14:42.566 "num_base_bdevs": 4, 00:14:42.566 "num_base_bdevs_discovered": 3, 00:14:42.566 "num_base_bdevs_operational": 3, 00:14:42.566 "base_bdevs_list": [ 00:14:42.566 { 00:14:42.566 "name": "spare", 00:14:42.566 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:42.566 "is_configured": true, 00:14:42.566 "data_offset": 2048, 00:14:42.566 "data_size": 63488 00:14:42.566 }, 00:14:42.566 { 00:14:42.566 "name": null, 00:14:42.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.566 "is_configured": false, 00:14:42.566 "data_offset": 0, 00:14:42.566 "data_size": 63488 00:14:42.566 }, 00:14:42.566 { 00:14:42.566 "name": "BaseBdev3", 00:14:42.566 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:42.566 "is_configured": true, 00:14:42.566 "data_offset": 2048, 00:14:42.566 "data_size": 63488 00:14:42.566 }, 00:14:42.566 { 00:14:42.566 "name": "BaseBdev4", 00:14:42.566 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:42.566 "is_configured": true, 00:14:42.566 "data_offset": 2048, 00:14:42.566 "data_size": 63488 00:14:42.566 } 00:14:42.566 ] 00:14:42.566 }' 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.566 "name": "raid_bdev1", 00:14:42.566 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:42.566 "strip_size_kb": 0, 00:14:42.566 "state": "online", 00:14:42.566 "raid_level": "raid1", 00:14:42.566 "superblock": true, 00:14:42.566 "num_base_bdevs": 4, 00:14:42.566 "num_base_bdevs_discovered": 3, 00:14:42.566 "num_base_bdevs_operational": 3, 00:14:42.566 "base_bdevs_list": [ 00:14:42.566 { 00:14:42.566 "name": "spare", 00:14:42.566 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:42.566 "is_configured": true, 00:14:42.566 "data_offset": 2048, 
00:14:42.566 "data_size": 63488 00:14:42.566 }, 00:14:42.566 { 00:14:42.566 "name": null, 00:14:42.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.566 "is_configured": false, 00:14:42.566 "data_offset": 0, 00:14:42.566 "data_size": 63488 00:14:42.566 }, 00:14:42.566 { 00:14:42.566 "name": "BaseBdev3", 00:14:42.566 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:42.566 "is_configured": true, 00:14:42.566 "data_offset": 2048, 00:14:42.566 "data_size": 63488 00:14:42.566 }, 00:14:42.566 { 00:14:42.566 "name": "BaseBdev4", 00:14:42.566 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:42.566 "is_configured": true, 00:14:42.566 "data_offset": 2048, 00:14:42.566 "data_size": 63488 00:14:42.566 } 00:14:42.566 ] 00:14:42.566 }' 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.566 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.826 
18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.826 "name": "raid_bdev1", 00:14:42.826 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:42.826 "strip_size_kb": 0, 00:14:42.826 "state": "online", 00:14:42.826 "raid_level": "raid1", 00:14:42.826 "superblock": true, 00:14:42.826 "num_base_bdevs": 4, 00:14:42.826 "num_base_bdevs_discovered": 3, 00:14:42.826 "num_base_bdevs_operational": 3, 00:14:42.826 "base_bdevs_list": [ 00:14:42.826 { 00:14:42.826 "name": "spare", 00:14:42.826 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:42.826 "is_configured": true, 00:14:42.826 "data_offset": 2048, 00:14:42.826 "data_size": 63488 00:14:42.826 }, 00:14:42.826 { 00:14:42.826 "name": null, 00:14:42.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.826 "is_configured": false, 00:14:42.826 "data_offset": 0, 00:14:42.826 "data_size": 63488 00:14:42.826 }, 00:14:42.826 { 00:14:42.826 "name": "BaseBdev3", 00:14:42.826 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:42.826 "is_configured": true, 00:14:42.826 "data_offset": 2048, 00:14:42.826 "data_size": 63488 
00:14:42.826 }, 00:14:42.826 { 00:14:42.826 "name": "BaseBdev4", 00:14:42.826 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:42.826 "is_configured": true, 00:14:42.826 "data_offset": 2048, 00:14:42.826 "data_size": 63488 00:14:42.826 } 00:14:42.826 ] 00:14:42.826 }' 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.826 18:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.084 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.084 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.084 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.084 [2024-12-06 18:11:55.236704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.084 [2024-12-06 18:11:55.236741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.084 [2024-12-06 18:11:55.236860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.084 [2024-12-06 18:11:55.236958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.084 [2024-12-06 18:11:55.236992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:43.085 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.085 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.085 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.085 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:43.085 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.344 
18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:43.344 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:43.659 /dev/nbd0 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.659 1+0 records in 00:14:43.659 1+0 records out 00:14:43.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344657 s, 11.9 MB/s 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:43.659 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:43.659 /dev/nbd1 00:14:43.939 18:11:55 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.939 1+0 records in 00:14:43.939 1+0 records out 00:14:43.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428975 s, 9.5 MB/s 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:43.939 18:11:55 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:43.939 18:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:43.939 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:43.939 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.939 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:43.939 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.939 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:43.939 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.939 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.199 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.459 [2024-12-06 18:11:56.455434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:44.459 [2024-12-06 18:11:56.455515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.459 [2024-12-06 18:11:56.455559] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:44.459 [2024-12-06 18:11:56.455579] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.459 [2024-12-06 18:11:56.458100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.459 [2024-12-06 18:11:56.458141] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.459 [2024-12-06 18:11:56.458265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:44.459 [2024-12-06 18:11:56.458336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.459 [2024-12-06 18:11:56.458568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.459 [2024-12-06 18:11:56.458716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:44.459 spare 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.459 [2024-12-06 18:11:56.558676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:44.459 [2024-12-06 18:11:56.558733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:44.459 [2024-12-06 18:11:56.559155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:44.459 [2024-12-06 18:11:56.559398] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:44.459 [2024-12-06 18:11:56.559423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:44.459 [2024-12-06 18:11:56.559675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.459 "name": "raid_bdev1", 00:14:44.459 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:44.459 "strip_size_kb": 0, 00:14:44.459 "state": "online", 00:14:44.459 "raid_level": "raid1", 00:14:44.459 "superblock": true, 00:14:44.459 "num_base_bdevs": 4, 00:14:44.459 "num_base_bdevs_discovered": 3, 00:14:44.459 "num_base_bdevs_operational": 3, 00:14:44.459 "base_bdevs_list": [ 00:14:44.459 { 00:14:44.459 "name": "spare", 00:14:44.459 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:44.459 "is_configured": true, 00:14:44.459 "data_offset": 2048, 00:14:44.459 "data_size": 63488 00:14:44.459 }, 00:14:44.459 { 00:14:44.459 "name": null, 00:14:44.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.459 "is_configured": false, 00:14:44.459 "data_offset": 2048, 00:14:44.459 "data_size": 63488 00:14:44.459 }, 00:14:44.459 { 00:14:44.459 "name": "BaseBdev3", 00:14:44.459 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:44.459 "is_configured": true, 00:14:44.459 "data_offset": 2048, 00:14:44.459 "data_size": 63488 00:14:44.459 }, 00:14:44.459 { 00:14:44.459 "name": "BaseBdev4", 00:14:44.459 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:44.459 "is_configured": true, 00:14:44.459 "data_offset": 2048, 00:14:44.459 "data_size": 63488 00:14:44.459 } 00:14:44.459 ] 00:14:44.459 }' 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.459 18:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.026 18:11:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.026 "name": "raid_bdev1", 00:14:45.026 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:45.026 "strip_size_kb": 0, 00:14:45.026 "state": "online", 00:14:45.026 "raid_level": "raid1", 00:14:45.026 "superblock": true, 00:14:45.026 "num_base_bdevs": 4, 00:14:45.026 "num_base_bdevs_discovered": 3, 00:14:45.026 "num_base_bdevs_operational": 3, 00:14:45.026 "base_bdevs_list": [ 00:14:45.026 { 00:14:45.026 "name": "spare", 00:14:45.026 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:45.026 "is_configured": true, 00:14:45.026 "data_offset": 2048, 00:14:45.026 "data_size": 63488 00:14:45.026 }, 00:14:45.026 { 00:14:45.026 "name": null, 00:14:45.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.026 "is_configured": false, 00:14:45.026 "data_offset": 2048, 00:14:45.026 "data_size": 63488 00:14:45.026 }, 00:14:45.026 { 00:14:45.026 "name": "BaseBdev3", 00:14:45.026 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:45.026 "is_configured": true, 00:14:45.026 "data_offset": 2048, 00:14:45.026 "data_size": 63488 00:14:45.026 
}, 00:14:45.026 { 00:14:45.026 "name": "BaseBdev4", 00:14:45.026 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:45.026 "is_configured": true, 00:14:45.026 "data_offset": 2048, 00:14:45.026 "data_size": 63488 00:14:45.026 } 00:14:45.026 ] 00:14:45.026 }' 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.026 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.027 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.027 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.027 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.027 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:45.027 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.288 [2024-12-06 18:11:57.218490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.288 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.288 "name": "raid_bdev1", 00:14:45.288 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:45.288 "strip_size_kb": 0, 00:14:45.288 "state": "online", 00:14:45.288 "raid_level": "raid1", 00:14:45.288 "superblock": true, 00:14:45.288 "num_base_bdevs": 4, 00:14:45.288 "num_base_bdevs_discovered": 2, 00:14:45.288 "num_base_bdevs_operational": 
2, 00:14:45.288 "base_bdevs_list": [ 00:14:45.288 { 00:14:45.288 "name": null, 00:14:45.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.288 "is_configured": false, 00:14:45.288 "data_offset": 0, 00:14:45.288 "data_size": 63488 00:14:45.288 }, 00:14:45.288 { 00:14:45.288 "name": null, 00:14:45.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.288 "is_configured": false, 00:14:45.288 "data_offset": 2048, 00:14:45.288 "data_size": 63488 00:14:45.288 }, 00:14:45.288 { 00:14:45.288 "name": "BaseBdev3", 00:14:45.288 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:45.288 "is_configured": true, 00:14:45.288 "data_offset": 2048, 00:14:45.288 "data_size": 63488 00:14:45.288 }, 00:14:45.288 { 00:14:45.288 "name": "BaseBdev4", 00:14:45.289 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:45.289 "is_configured": true, 00:14:45.289 "data_offset": 2048, 00:14:45.289 "data_size": 63488 00:14:45.289 } 00:14:45.289 ] 00:14:45.289 }' 00:14:45.289 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.289 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.547 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.547 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.547 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.547 [2024-12-06 18:11:57.673733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.547 [2024-12-06 18:11:57.673962] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:45.547 [2024-12-06 18:11:57.674004] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:45.547 [2024-12-06 18:11:57.674058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.547 [2024-12-06 18:11:57.688561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:45.547 18:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.547 18:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:45.547 [2024-12-06 18:11:57.690801] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.922 "name": "raid_bdev1", 00:14:46.922 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:46.922 "strip_size_kb": 0, 00:14:46.922 "state": "online", 00:14:46.922 "raid_level": "raid1", 
00:14:46.922 "superblock": true, 00:14:46.922 "num_base_bdevs": 4, 00:14:46.922 "num_base_bdevs_discovered": 3, 00:14:46.922 "num_base_bdevs_operational": 3, 00:14:46.922 "process": { 00:14:46.922 "type": "rebuild", 00:14:46.922 "target": "spare", 00:14:46.922 "progress": { 00:14:46.922 "blocks": 20480, 00:14:46.922 "percent": 32 00:14:46.922 } 00:14:46.922 }, 00:14:46.922 "base_bdevs_list": [ 00:14:46.922 { 00:14:46.922 "name": "spare", 00:14:46.922 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:46.922 "is_configured": true, 00:14:46.922 "data_offset": 2048, 00:14:46.922 "data_size": 63488 00:14:46.922 }, 00:14:46.922 { 00:14:46.922 "name": null, 00:14:46.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.922 "is_configured": false, 00:14:46.922 "data_offset": 2048, 00:14:46.922 "data_size": 63488 00:14:46.922 }, 00:14:46.922 { 00:14:46.922 "name": "BaseBdev3", 00:14:46.922 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:46.922 "is_configured": true, 00:14:46.922 "data_offset": 2048, 00:14:46.922 "data_size": 63488 00:14:46.922 }, 00:14:46.922 { 00:14:46.922 "name": "BaseBdev4", 00:14:46.922 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:46.922 "is_configured": true, 00:14:46.922 "data_offset": 2048, 00:14:46.922 "data_size": 63488 00:14:46.922 } 00:14:46.922 ] 00:14:46.922 }' 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.922 [2024-12-06 18:11:58.850242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.922 [2024-12-06 18:11:58.896567] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.922 [2024-12-06 18:11:58.896659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.922 [2024-12-06 18:11:58.896690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.922 [2024-12-06 18:11:58.896708] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.922 "name": "raid_bdev1", 00:14:46.922 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:46.922 "strip_size_kb": 0, 00:14:46.922 "state": "online", 00:14:46.922 "raid_level": "raid1", 00:14:46.922 "superblock": true, 00:14:46.922 "num_base_bdevs": 4, 00:14:46.922 "num_base_bdevs_discovered": 2, 00:14:46.922 "num_base_bdevs_operational": 2, 00:14:46.922 "base_bdevs_list": [ 00:14:46.922 { 00:14:46.922 "name": null, 00:14:46.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.922 "is_configured": false, 00:14:46.922 "data_offset": 0, 00:14:46.922 "data_size": 63488 00:14:46.922 }, 00:14:46.922 { 00:14:46.922 "name": null, 00:14:46.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.922 "is_configured": false, 00:14:46.922 "data_offset": 2048, 00:14:46.922 "data_size": 63488 00:14:46.922 }, 00:14:46.922 { 00:14:46.922 "name": "BaseBdev3", 00:14:46.922 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:46.922 "is_configured": true, 00:14:46.922 "data_offset": 2048, 00:14:46.922 "data_size": 63488 00:14:46.922 }, 00:14:46.922 { 00:14:46.922 "name": "BaseBdev4", 00:14:46.922 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:46.922 "is_configured": true, 00:14:46.922 "data_offset": 2048, 00:14:46.922 "data_size": 63488 00:14:46.922 } 00:14:46.922 ] 00:14:46.922 }' 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:46.922 18:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.489 18:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:47.489 18:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.489 18:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.489 [2024-12-06 18:11:59.390638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:47.489 [2024-12-06 18:11:59.390731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.490 [2024-12-06 18:11:59.390792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:47.490 [2024-12-06 18:11:59.390813] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.490 [2024-12-06 18:11:59.391400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.490 [2024-12-06 18:11:59.391437] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:47.490 [2024-12-06 18:11:59.391568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:47.490 [2024-12-06 18:11:59.391600] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:47.490 [2024-12-06 18:11:59.391639] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:47.490 [2024-12-06 18:11:59.391680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.490 [2024-12-06 18:11:59.407969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:47.490 spare 00:14:47.490 18:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.490 18:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:47.490 [2024-12-06 18:11:59.410317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.450 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.450 "name": "raid_bdev1", 00:14:48.450 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:48.450 "strip_size_kb": 0, 00:14:48.450 "state": "online", 00:14:48.450 
"raid_level": "raid1", 00:14:48.450 "superblock": true, 00:14:48.450 "num_base_bdevs": 4, 00:14:48.450 "num_base_bdevs_discovered": 3, 00:14:48.450 "num_base_bdevs_operational": 3, 00:14:48.450 "process": { 00:14:48.450 "type": "rebuild", 00:14:48.450 "target": "spare", 00:14:48.450 "progress": { 00:14:48.450 "blocks": 20480, 00:14:48.450 "percent": 32 00:14:48.450 } 00:14:48.450 }, 00:14:48.450 "base_bdevs_list": [ 00:14:48.450 { 00:14:48.450 "name": "spare", 00:14:48.450 "uuid": "f7546b13-e6ed-5553-8524-d2b0753ea14b", 00:14:48.450 "is_configured": true, 00:14:48.451 "data_offset": 2048, 00:14:48.451 "data_size": 63488 00:14:48.451 }, 00:14:48.451 { 00:14:48.451 "name": null, 00:14:48.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.451 "is_configured": false, 00:14:48.451 "data_offset": 2048, 00:14:48.451 "data_size": 63488 00:14:48.451 }, 00:14:48.451 { 00:14:48.451 "name": "BaseBdev3", 00:14:48.451 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:48.451 "is_configured": true, 00:14:48.451 "data_offset": 2048, 00:14:48.451 "data_size": 63488 00:14:48.451 }, 00:14:48.451 { 00:14:48.451 "name": "BaseBdev4", 00:14:48.451 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:48.451 "is_configured": true, 00:14:48.451 "data_offset": 2048, 00:14:48.451 "data_size": 63488 00:14:48.451 } 00:14:48.451 ] 00:14:48.451 }' 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.451 [2024-12-06 18:12:00.513348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.451 [2024-12-06 18:12:00.515685] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.451 [2024-12-06 18:12:00.515833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.451 [2024-12-06 18:12:00.515886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.451 [2024-12-06 18:12:00.515922] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.451 
18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.451 "name": "raid_bdev1", 00:14:48.451 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:48.451 "strip_size_kb": 0, 00:14:48.451 "state": "online", 00:14:48.451 "raid_level": "raid1", 00:14:48.451 "superblock": true, 00:14:48.451 "num_base_bdevs": 4, 00:14:48.451 "num_base_bdevs_discovered": 2, 00:14:48.451 "num_base_bdevs_operational": 2, 00:14:48.451 "base_bdevs_list": [ 00:14:48.451 { 00:14:48.451 "name": null, 00:14:48.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.451 "is_configured": false, 00:14:48.451 "data_offset": 0, 00:14:48.451 "data_size": 63488 00:14:48.451 }, 00:14:48.451 { 00:14:48.451 "name": null, 00:14:48.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.451 "is_configured": false, 00:14:48.451 "data_offset": 2048, 00:14:48.451 "data_size": 63488 00:14:48.451 }, 00:14:48.451 { 00:14:48.451 "name": "BaseBdev3", 00:14:48.451 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:48.451 "is_configured": true, 00:14:48.451 "data_offset": 2048, 00:14:48.451 "data_size": 63488 00:14:48.451 }, 00:14:48.451 { 00:14:48.451 "name": "BaseBdev4", 00:14:48.451 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:48.451 "is_configured": true, 00:14:48.451 "data_offset": 2048, 00:14:48.451 "data_size": 63488 00:14:48.451 } 00:14:48.451 ] 00:14:48.451 }' 00:14:48.451 18:12:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.451 18:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.017 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.017 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.017 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.017 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.017 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.017 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.017 18:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.017 "name": "raid_bdev1", 00:14:49.017 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:49.017 "strip_size_kb": 0, 00:14:49.017 "state": "online", 00:14:49.017 "raid_level": "raid1", 00:14:49.017 "superblock": true, 00:14:49.017 "num_base_bdevs": 4, 00:14:49.017 "num_base_bdevs_discovered": 2, 00:14:49.017 "num_base_bdevs_operational": 2, 00:14:49.017 "base_bdevs_list": [ 00:14:49.017 { 00:14:49.017 "name": null, 00:14:49.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.017 "is_configured": false, 00:14:49.017 "data_offset": 0, 00:14:49.017 "data_size": 63488 00:14:49.017 }, 00:14:49.017 
{ 00:14:49.017 "name": null, 00:14:49.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.017 "is_configured": false, 00:14:49.017 "data_offset": 2048, 00:14:49.017 "data_size": 63488 00:14:49.017 }, 00:14:49.017 { 00:14:49.017 "name": "BaseBdev3", 00:14:49.017 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:49.017 "is_configured": true, 00:14:49.017 "data_offset": 2048, 00:14:49.017 "data_size": 63488 00:14:49.017 }, 00:14:49.017 { 00:14:49.017 "name": "BaseBdev4", 00:14:49.017 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:49.017 "is_configured": true, 00:14:49.017 "data_offset": 2048, 00:14:49.017 "data_size": 63488 00:14:49.017 } 00:14:49.017 ] 00:14:49.017 }' 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.017 [2024-12-06 18:12:01.159149] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:49.017 [2024-12-06 18:12:01.159222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.017 [2024-12-06 18:12:01.159249] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:49.017 [2024-12-06 18:12:01.159261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.017 [2024-12-06 18:12:01.159797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.017 [2024-12-06 18:12:01.159833] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:49.017 [2024-12-06 18:12:01.159928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:49.017 [2024-12-06 18:12:01.159946] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:49.017 [2024-12-06 18:12:01.159955] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:49.017 [2024-12-06 18:12:01.159971] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:49.017 BaseBdev1 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.017 18:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.393 18:12:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.393 "name": "raid_bdev1", 00:14:50.393 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:50.393 "strip_size_kb": 0, 00:14:50.393 "state": "online", 00:14:50.393 "raid_level": "raid1", 00:14:50.393 "superblock": true, 00:14:50.393 "num_base_bdevs": 4, 00:14:50.393 "num_base_bdevs_discovered": 2, 00:14:50.393 "num_base_bdevs_operational": 2, 00:14:50.393 "base_bdevs_list": [ 00:14:50.393 { 00:14:50.393 "name": null, 00:14:50.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.393 "is_configured": false, 00:14:50.393 "data_offset": 0, 00:14:50.393 "data_size": 63488 00:14:50.393 }, 00:14:50.393 { 00:14:50.393 "name": null, 00:14:50.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.393 
"is_configured": false, 00:14:50.393 "data_offset": 2048, 00:14:50.393 "data_size": 63488 00:14:50.393 }, 00:14:50.393 { 00:14:50.393 "name": "BaseBdev3", 00:14:50.393 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:50.393 "is_configured": true, 00:14:50.393 "data_offset": 2048, 00:14:50.393 "data_size": 63488 00:14:50.393 }, 00:14:50.393 { 00:14:50.393 "name": "BaseBdev4", 00:14:50.393 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:50.393 "is_configured": true, 00:14:50.393 "data_offset": 2048, 00:14:50.393 "data_size": 63488 00:14:50.393 } 00:14:50.393 ] 00:14:50.393 }' 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.393 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:50.652 "name": "raid_bdev1", 00:14:50.652 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:50.652 "strip_size_kb": 0, 00:14:50.652 "state": "online", 00:14:50.652 "raid_level": "raid1", 00:14:50.652 "superblock": true, 00:14:50.652 "num_base_bdevs": 4, 00:14:50.652 "num_base_bdevs_discovered": 2, 00:14:50.652 "num_base_bdevs_operational": 2, 00:14:50.652 "base_bdevs_list": [ 00:14:50.652 { 00:14:50.652 "name": null, 00:14:50.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.652 "is_configured": false, 00:14:50.652 "data_offset": 0, 00:14:50.652 "data_size": 63488 00:14:50.652 }, 00:14:50.652 { 00:14:50.652 "name": null, 00:14:50.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.652 "is_configured": false, 00:14:50.652 "data_offset": 2048, 00:14:50.652 "data_size": 63488 00:14:50.652 }, 00:14:50.652 { 00:14:50.652 "name": "BaseBdev3", 00:14:50.652 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:50.652 "is_configured": true, 00:14:50.652 "data_offset": 2048, 00:14:50.652 "data_size": 63488 00:14:50.652 }, 00:14:50.652 { 00:14:50.652 "name": "BaseBdev4", 00:14:50.652 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:50.652 "is_configured": true, 00:14:50.652 "data_offset": 2048, 00:14:50.652 "data_size": 63488 00:14:50.652 } 00:14:50.652 ] 00:14:50.652 }' 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.652 [2024-12-06 18:12:02.788457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.652 [2024-12-06 18:12:02.788725] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:50.652 [2024-12-06 18:12:02.788801] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:50.652 request: 00:14:50.652 { 00:14:50.652 "base_bdev": "BaseBdev1", 00:14:50.652 "raid_bdev": "raid_bdev1", 00:14:50.652 "method": "bdev_raid_add_base_bdev", 00:14:50.652 "req_id": 1 00:14:50.652 } 00:14:50.652 Got JSON-RPC error response 00:14:50.652 response: 00:14:50.652 { 00:14:50.652 "code": -22, 00:14:50.652 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:50.652 } 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:50.652 18:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.031 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.031 "name": "raid_bdev1", 00:14:52.031 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:52.031 "strip_size_kb": 0, 00:14:52.031 "state": "online", 00:14:52.031 "raid_level": "raid1", 00:14:52.031 "superblock": true, 00:14:52.031 "num_base_bdevs": 4, 00:14:52.031 "num_base_bdevs_discovered": 2, 00:14:52.031 "num_base_bdevs_operational": 2, 00:14:52.031 "base_bdevs_list": [ 00:14:52.031 { 00:14:52.031 "name": null, 00:14:52.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.031 "is_configured": false, 00:14:52.031 "data_offset": 0, 00:14:52.031 "data_size": 63488 00:14:52.031 }, 00:14:52.031 { 00:14:52.031 "name": null, 00:14:52.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.031 "is_configured": false, 00:14:52.032 "data_offset": 2048, 00:14:52.032 "data_size": 63488 00:14:52.032 }, 00:14:52.032 { 00:14:52.032 "name": "BaseBdev3", 00:14:52.032 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:52.032 "is_configured": true, 00:14:52.032 "data_offset": 2048, 00:14:52.032 "data_size": 63488 00:14:52.032 }, 00:14:52.032 { 00:14:52.032 "name": "BaseBdev4", 00:14:52.032 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:52.032 "is_configured": true, 00:14:52.032 "data_offset": 2048, 00:14:52.032 "data_size": 63488 00:14:52.032 } 00:14:52.032 ] 00:14:52.032 }' 00:14:52.032 18:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.032 18:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.290 18:12:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.290 "name": "raid_bdev1", 00:14:52.290 "uuid": "9c67a451-dd95-472e-8b82-61a17460fdf3", 00:14:52.290 "strip_size_kb": 0, 00:14:52.290 "state": "online", 00:14:52.290 "raid_level": "raid1", 00:14:52.290 "superblock": true, 00:14:52.290 "num_base_bdevs": 4, 00:14:52.290 "num_base_bdevs_discovered": 2, 00:14:52.290 "num_base_bdevs_operational": 2, 00:14:52.290 "base_bdevs_list": [ 00:14:52.290 { 00:14:52.290 "name": null, 00:14:52.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.290 "is_configured": false, 00:14:52.290 "data_offset": 0, 00:14:52.290 "data_size": 63488 00:14:52.290 }, 00:14:52.290 { 00:14:52.290 "name": null, 00:14:52.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.290 "is_configured": false, 00:14:52.290 "data_offset": 2048, 00:14:52.290 "data_size": 63488 00:14:52.290 }, 00:14:52.290 { 00:14:52.290 "name": "BaseBdev3", 00:14:52.290 "uuid": "2d28c7b0-c3c3-594c-bd8f-386b2f9f8cc9", 00:14:52.290 "is_configured": true, 00:14:52.290 "data_offset": 2048, 00:14:52.290 "data_size": 63488 00:14:52.290 }, 
00:14:52.290 { 00:14:52.290 "name": "BaseBdev4", 00:14:52.290 "uuid": "9ddf4269-ebcf-5ea5-818f-178932d83c6b", 00:14:52.290 "is_configured": true, 00:14:52.290 "data_offset": 2048, 00:14:52.290 "data_size": 63488 00:14:52.290 } 00:14:52.290 ] 00:14:52.290 }' 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78523 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78523 ']' 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78523 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.290 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78523 00:14:52.550 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.550 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.550 killing process with pid 78523 00:14:52.550 Received shutdown signal, test time was about 60.000000 seconds 00:14:52.550 00:14:52.550 Latency(us) 00:14:52.550 [2024-12-06T18:12:04.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.550 [2024-12-06T18:12:04.718Z] 
=================================================================================================================== 00:14:52.550 [2024-12-06T18:12:04.718Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:52.550 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78523' 00:14:52.550 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78523 00:14:52.550 18:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78523 00:14:52.550 [2024-12-06 18:12:04.457037] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.550 [2024-12-06 18:12:04.457182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.550 [2024-12-06 18:12:04.457267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.550 [2024-12-06 18:12:04.457278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:52.810 [2024-12-06 18:12:04.960916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:54.224 00:14:54.224 real 0m26.287s 00:14:54.224 user 0m31.657s 00:14:54.224 sys 0m3.983s 00:14:54.224 ************************************ 00:14:54.224 END TEST raid_rebuild_test_sb 00:14:54.224 ************************************ 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.224 18:12:06 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:54.224 18:12:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:54.224 18:12:06 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.224 18:12:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.224 ************************************ 00:14:54.224 START TEST raid_rebuild_test_io 00:14:54.224 ************************************ 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79289 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79289 00:14:54.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79289 ']' 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.224 18:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.224 [2024-12-06 18:12:06.297879] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:14:54.224 [2024-12-06 18:12:06.298099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:54.224 Zero copy mechanism will not be used. 
00:14:54.224 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79289 ] 00:14:54.483 [2024-12-06 18:12:06.473146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.483 [2024-12-06 18:12:06.583795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.742 [2024-12-06 18:12:06.789747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.742 [2024-12-06 18:12:06.789856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.001 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.001 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:55.001 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.002 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:55.002 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.002 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.262 BaseBdev1_malloc 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.262 [2024-12-06 18:12:07.193465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:55.262 [2024-12-06 18:12:07.193528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:55.262 [2024-12-06 18:12:07.193552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:55.262 [2024-12-06 18:12:07.193563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.262 [2024-12-06 18:12:07.195944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.262 [2024-12-06 18:12:07.195988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:55.262 BaseBdev1 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.262 BaseBdev2_malloc 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.262 [2024-12-06 18:12:07.249185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:55.262 [2024-12-06 18:12:07.249262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.262 [2024-12-06 18:12:07.249285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:55.262 [2024-12-06 18:12:07.249296] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.262 [2024-12-06 18:12:07.251390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.262 [2024-12-06 18:12:07.251428] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:55.262 BaseBdev2 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.262 BaseBdev3_malloc 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.262 [2024-12-06 18:12:07.317924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:55.262 [2024-12-06 18:12:07.318021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.262 [2024-12-06 18:12:07.318087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:55.262 [2024-12-06 18:12:07.318125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.262 [2024-12-06 18:12:07.320298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:55.262 [2024-12-06 18:12:07.320378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:55.262 BaseBdev3 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.262 BaseBdev4_malloc 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.262 [2024-12-06 18:12:07.374022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:55.262 [2024-12-06 18:12:07.374138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.262 [2024-12-06 18:12:07.374180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:55.262 [2024-12-06 18:12:07.374217] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.262 [2024-12-06 18:12:07.376439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.262 [2024-12-06 18:12:07.376519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:55.262 BaseBdev4 00:14:55.262 18:12:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.262 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:55.263 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.263 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.263 spare_malloc 00:14:55.263 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.263 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:55.263 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.263 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.523 spare_delay 00:14:55.523 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.523 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:55.523 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.523 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.523 [2024-12-06 18:12:07.442476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:55.523 [2024-12-06 18:12:07.442582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.523 [2024-12-06 18:12:07.442633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:55.523 [2024-12-06 18:12:07.442683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.523 [2024-12-06 18:12:07.444976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:55.523 [2024-12-06 18:12:07.445065] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:55.523 spare 00:14:55.523 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.523 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:55.523 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.523 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.523 [2024-12-06 18:12:07.454504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.523 [2024-12-06 18:12:07.456523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.523 [2024-12-06 18:12:07.456637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.523 [2024-12-06 18:12:07.456723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:55.523 [2024-12-06 18:12:07.456833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:55.523 [2024-12-06 18:12:07.456893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:55.523 [2024-12-06 18:12:07.457215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:55.524 [2024-12-06 18:12:07.457413] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:55.524 [2024-12-06 18:12:07.457429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:55.524 [2024-12-06 18:12:07.457612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.524 "name": "raid_bdev1", 00:14:55.524 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:14:55.524 "strip_size_kb": 0, 00:14:55.524 "state": "online", 00:14:55.524 "raid_level": "raid1", 00:14:55.524 "superblock": 
false, 00:14:55.524 "num_base_bdevs": 4, 00:14:55.524 "num_base_bdevs_discovered": 4, 00:14:55.524 "num_base_bdevs_operational": 4, 00:14:55.524 "base_bdevs_list": [ 00:14:55.524 { 00:14:55.524 "name": "BaseBdev1", 00:14:55.524 "uuid": "80e259aa-05cf-54dc-bc42-850ee008dd0f", 00:14:55.524 "is_configured": true, 00:14:55.524 "data_offset": 0, 00:14:55.524 "data_size": 65536 00:14:55.524 }, 00:14:55.524 { 00:14:55.524 "name": "BaseBdev2", 00:14:55.524 "uuid": "884665bd-2039-5fb2-8938-31a70a03267f", 00:14:55.524 "is_configured": true, 00:14:55.524 "data_offset": 0, 00:14:55.524 "data_size": 65536 00:14:55.524 }, 00:14:55.524 { 00:14:55.524 "name": "BaseBdev3", 00:14:55.524 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:14:55.524 "is_configured": true, 00:14:55.524 "data_offset": 0, 00:14:55.524 "data_size": 65536 00:14:55.524 }, 00:14:55.524 { 00:14:55.524 "name": "BaseBdev4", 00:14:55.524 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:14:55.524 "is_configured": true, 00:14:55.524 "data_offset": 0, 00:14:55.524 "data_size": 65536 00:14:55.524 } 00:14:55.524 ] 00:14:55.524 }' 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.524 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.784 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.784 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.784 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.784 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:55.784 [2024-12-06 18:12:07.926054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.784 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.045 18:12:07 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:56.045 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.045 18:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:56.045 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.045 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.045 18:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.045 [2024-12-06 18:12:08.029516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.045 18:12:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.045 "name": "raid_bdev1", 00:14:56.045 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:14:56.045 "strip_size_kb": 0, 00:14:56.045 "state": "online", 00:14:56.045 "raid_level": "raid1", 00:14:56.045 "superblock": false, 00:14:56.045 "num_base_bdevs": 4, 00:14:56.045 "num_base_bdevs_discovered": 3, 00:14:56.045 "num_base_bdevs_operational": 3, 00:14:56.045 "base_bdevs_list": [ 00:14:56.045 { 00:14:56.045 "name": null, 00:14:56.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.045 "is_configured": false, 00:14:56.045 "data_offset": 0, 00:14:56.045 "data_size": 65536 00:14:56.045 }, 00:14:56.045 { 00:14:56.045 "name": "BaseBdev2", 00:14:56.045 "uuid": "884665bd-2039-5fb2-8938-31a70a03267f", 00:14:56.045 
"is_configured": true, 00:14:56.045 "data_offset": 0, 00:14:56.045 "data_size": 65536 00:14:56.045 }, 00:14:56.045 { 00:14:56.045 "name": "BaseBdev3", 00:14:56.045 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:14:56.045 "is_configured": true, 00:14:56.045 "data_offset": 0, 00:14:56.045 "data_size": 65536 00:14:56.045 }, 00:14:56.045 { 00:14:56.045 "name": "BaseBdev4", 00:14:56.045 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:14:56.045 "is_configured": true, 00:14:56.045 "data_offset": 0, 00:14:56.045 "data_size": 65536 00:14:56.045 } 00:14:56.045 ] 00:14:56.045 }' 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.045 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.045 [2024-12-06 18:12:08.138223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:56.045 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:56.045 Zero copy mechanism will not be used. 00:14:56.045 Running I/O for 60 seconds... 
00:14:56.615 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:56.615 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.615 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.615 [2024-12-06 18:12:08.498982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:56.615 18:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.615 18:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:56.615 [2024-12-06 18:12:08.554809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:56.615 [2024-12-06 18:12:08.557148] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.615 [2024-12-06 18:12:08.667439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:56.615 [2024-12-06 18:12:08.669093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:56.874 [2024-12-06 18:12:08.889300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:56.874 [2024-12-06 18:12:08.889732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:57.133 159.00 IOPS, 477.00 MiB/s [2024-12-06T18:12:09.301Z] [2024-12-06 18:12:09.155354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:57.393 [2024-12-06 18:12:09.379206] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:57.393 [2024-12-06 18:12:09.379697] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:57.393 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.393 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.393 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.393 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.393 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.393 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.393 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.393 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.393 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.653 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.653 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.653 "name": "raid_bdev1", 00:14:57.653 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:14:57.653 "strip_size_kb": 0, 00:14:57.653 "state": "online", 00:14:57.653 "raid_level": "raid1", 00:14:57.653 "superblock": false, 00:14:57.653 "num_base_bdevs": 4, 00:14:57.653 "num_base_bdevs_discovered": 4, 00:14:57.653 "num_base_bdevs_operational": 4, 00:14:57.653 "process": { 00:14:57.653 "type": "rebuild", 00:14:57.653 "target": "spare", 00:14:57.653 "progress": { 00:14:57.653 "blocks": 10240, 00:14:57.653 "percent": 15 00:14:57.653 } 00:14:57.653 }, 00:14:57.653 "base_bdevs_list": [ 00:14:57.653 { 00:14:57.653 "name": "spare", 00:14:57.653 "uuid": 
"f8d88db7-9bbf-5bc9-8ecb-aa0950ec6c99", 00:14:57.653 "is_configured": true, 00:14:57.653 "data_offset": 0, 00:14:57.653 "data_size": 65536 00:14:57.653 }, 00:14:57.653 { 00:14:57.653 "name": "BaseBdev2", 00:14:57.653 "uuid": "884665bd-2039-5fb2-8938-31a70a03267f", 00:14:57.653 "is_configured": true, 00:14:57.653 "data_offset": 0, 00:14:57.653 "data_size": 65536 00:14:57.653 }, 00:14:57.653 { 00:14:57.653 "name": "BaseBdev3", 00:14:57.653 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:14:57.653 "is_configured": true, 00:14:57.653 "data_offset": 0, 00:14:57.653 "data_size": 65536 00:14:57.653 }, 00:14:57.653 { 00:14:57.653 "name": "BaseBdev4", 00:14:57.653 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:14:57.653 "is_configured": true, 00:14:57.653 "data_offset": 0, 00:14:57.653 "data_size": 65536 00:14:57.653 } 00:14:57.653 ] 00:14:57.653 }' 00:14:57.653 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.653 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.653 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.653 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.653 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:57.653 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.653 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.653 [2024-12-06 18:12:09.674808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:57.913 [2024-12-06 18:12:09.845254] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:57.913 [2024-12-06 18:12:09.850419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:57.913 [2024-12-06 18:12:09.850474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:57.913 [2024-12-06 18:12:09.850488] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:57.913 [2024-12-06 18:12:09.874349] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.913 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.913 "name": "raid_bdev1", 00:14:57.913 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:14:57.913 "strip_size_kb": 0, 00:14:57.913 "state": "online", 00:14:57.913 "raid_level": "raid1", 00:14:57.913 "superblock": false, 00:14:57.913 "num_base_bdevs": 4, 00:14:57.914 "num_base_bdevs_discovered": 3, 00:14:57.914 "num_base_bdevs_operational": 3, 00:14:57.914 "base_bdevs_list": [ 00:14:57.914 { 00:14:57.914 "name": null, 00:14:57.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.914 "is_configured": false, 00:14:57.914 "data_offset": 0, 00:14:57.914 "data_size": 65536 00:14:57.914 }, 00:14:57.914 { 00:14:57.914 "name": "BaseBdev2", 00:14:57.914 "uuid": "884665bd-2039-5fb2-8938-31a70a03267f", 00:14:57.914 "is_configured": true, 00:14:57.914 "data_offset": 0, 00:14:57.914 "data_size": 65536 00:14:57.914 }, 00:14:57.914 { 00:14:57.914 "name": "BaseBdev3", 00:14:57.914 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:14:57.914 "is_configured": true, 00:14:57.914 "data_offset": 0, 00:14:57.914 "data_size": 65536 00:14:57.914 }, 00:14:57.914 { 00:14:57.914 "name": "BaseBdev4", 00:14:57.914 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:14:57.914 "is_configured": true, 00:14:57.914 "data_offset": 0, 00:14:57.914 "data_size": 65536 00:14:57.914 } 00:14:57.914 ] 00:14:57.914 }' 00:14:57.914 18:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.914 18:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.433 140.00 IOPS, 420.00 MiB/s [2024-12-06T18:12:10.601Z] 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.433 18:12:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.433 "name": "raid_bdev1", 00:14:58.433 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:14:58.433 "strip_size_kb": 0, 00:14:58.433 "state": "online", 00:14:58.433 "raid_level": "raid1", 00:14:58.433 "superblock": false, 00:14:58.433 "num_base_bdevs": 4, 00:14:58.433 "num_base_bdevs_discovered": 3, 00:14:58.433 "num_base_bdevs_operational": 3, 00:14:58.433 "base_bdevs_list": [ 00:14:58.433 { 00:14:58.433 "name": null, 00:14:58.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.433 "is_configured": false, 00:14:58.433 "data_offset": 0, 00:14:58.433 "data_size": 65536 00:14:58.433 }, 00:14:58.433 { 00:14:58.433 "name": "BaseBdev2", 00:14:58.433 "uuid": "884665bd-2039-5fb2-8938-31a70a03267f", 00:14:58.433 "is_configured": true, 00:14:58.433 "data_offset": 0, 00:14:58.433 "data_size": 65536 00:14:58.433 }, 00:14:58.433 { 00:14:58.433 "name": "BaseBdev3", 00:14:58.433 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 
00:14:58.433 "is_configured": true, 00:14:58.433 "data_offset": 0, 00:14:58.433 "data_size": 65536 00:14:58.433 }, 00:14:58.433 { 00:14:58.433 "name": "BaseBdev4", 00:14:58.433 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:14:58.433 "is_configured": true, 00:14:58.433 "data_offset": 0, 00:14:58.433 "data_size": 65536 00:14:58.433 } 00:14:58.433 ] 00:14:58.433 }' 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.433 [2024-12-06 18:12:10.512877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.433 18:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:58.433 [2024-12-06 18:12:10.569339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:58.433 [2024-12-06 18:12:10.571349] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:58.695 [2024-12-06 18:12:10.683657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:58.695 [2024-12-06 18:12:10.689777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:58.955 [2024-12-06 18:12:10.901683] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:58.955 [2024-12-06 18:12:10.902550] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:59.473 139.00 IOPS, 417.00 MiB/s [2024-12-06T18:12:11.641Z] [2024-12-06 18:12:11.399877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.473 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.473 "name": "raid_bdev1", 00:14:59.473 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:14:59.473 "strip_size_kb": 0, 00:14:59.473 "state": "online", 00:14:59.473 "raid_level": "raid1", 
00:14:59.473 "superblock": false, 00:14:59.473 "num_base_bdevs": 4, 00:14:59.473 "num_base_bdevs_discovered": 4, 00:14:59.473 "num_base_bdevs_operational": 4, 00:14:59.473 "process": { 00:14:59.473 "type": "rebuild", 00:14:59.473 "target": "spare", 00:14:59.473 "progress": { 00:14:59.473 "blocks": 12288, 00:14:59.473 "percent": 18 00:14:59.473 } 00:14:59.473 }, 00:14:59.473 "base_bdevs_list": [ 00:14:59.473 { 00:14:59.473 "name": "spare", 00:14:59.473 "uuid": "f8d88db7-9bbf-5bc9-8ecb-aa0950ec6c99", 00:14:59.473 "is_configured": true, 00:14:59.473 "data_offset": 0, 00:14:59.473 "data_size": 65536 00:14:59.473 }, 00:14:59.474 { 00:14:59.474 "name": "BaseBdev2", 00:14:59.474 "uuid": "884665bd-2039-5fb2-8938-31a70a03267f", 00:14:59.474 "is_configured": true, 00:14:59.474 "data_offset": 0, 00:14:59.474 "data_size": 65536 00:14:59.474 }, 00:14:59.474 { 00:14:59.474 "name": "BaseBdev3", 00:14:59.474 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:14:59.474 "is_configured": true, 00:14:59.474 "data_offset": 0, 00:14:59.474 "data_size": 65536 00:14:59.474 }, 00:14:59.474 { 00:14:59.474 "name": "BaseBdev4", 00:14:59.474 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:14:59.474 "is_configured": true, 00:14:59.474 "data_offset": 0, 00:14:59.474 "data_size": 65536 00:14:59.474 } 00:14:59.474 ] 00:14:59.474 }' 00:14:59.474 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.474 [2024-12-06 18:12:11.636138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:59.474 [2024-12-06 18:12:11.636849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:59.733 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.733 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:14:59.733 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.733 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:59.733 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:59.733 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.734 [2024-12-06 18:12:11.714651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.734 [2024-12-06 18:12:11.757804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:59.734 [2024-12-06 18:12:11.814333] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:59.734 [2024-12-06 18:12:11.814469] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:59.734 [2024-12-06 18:12:11.816175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.734 "name": "raid_bdev1", 00:14:59.734 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:14:59.734 "strip_size_kb": 0, 00:14:59.734 "state": "online", 00:14:59.734 "raid_level": "raid1", 00:14:59.734 "superblock": false, 00:14:59.734 "num_base_bdevs": 4, 00:14:59.734 "num_base_bdevs_discovered": 3, 00:14:59.734 "num_base_bdevs_operational": 3, 00:14:59.734 "process": { 00:14:59.734 "type": "rebuild", 00:14:59.734 "target": "spare", 00:14:59.734 "progress": { 00:14:59.734 "blocks": 16384, 00:14:59.734 "percent": 25 00:14:59.734 } 00:14:59.734 }, 00:14:59.734 "base_bdevs_list": [ 00:14:59.734 { 00:14:59.734 "name": "spare", 00:14:59.734 "uuid": "f8d88db7-9bbf-5bc9-8ecb-aa0950ec6c99", 00:14:59.734 "is_configured": true, 00:14:59.734 "data_offset": 0, 00:14:59.734 "data_size": 65536 00:14:59.734 }, 00:14:59.734 { 00:14:59.734 "name": null, 
00:14:59.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.734 "is_configured": false, 00:14:59.734 "data_offset": 0, 00:14:59.734 "data_size": 65536 00:14:59.734 }, 00:14:59.734 { 00:14:59.734 "name": "BaseBdev3", 00:14:59.734 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:14:59.734 "is_configured": true, 00:14:59.734 "data_offset": 0, 00:14:59.734 "data_size": 65536 00:14:59.734 }, 00:14:59.734 { 00:14:59.734 "name": "BaseBdev4", 00:14:59.734 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:14:59.734 "is_configured": true, 00:14:59.734 "data_offset": 0, 00:14:59.734 "data_size": 65536 00:14:59.734 } 00:14:59.734 ] 00:14:59.734 }' 00:14:59.734 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.993 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.993 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.993 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=505 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.994 18:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.994 18:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.994 "name": "raid_bdev1", 00:14:59.994 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:14:59.994 "strip_size_kb": 0, 00:14:59.994 "state": "online", 00:14:59.994 "raid_level": "raid1", 00:14:59.994 "superblock": false, 00:14:59.994 "num_base_bdevs": 4, 00:14:59.994 "num_base_bdevs_discovered": 3, 00:14:59.994 "num_base_bdevs_operational": 3, 00:14:59.994 "process": { 00:14:59.994 "type": "rebuild", 00:14:59.994 "target": "spare", 00:14:59.994 "progress": { 00:14:59.994 "blocks": 18432, 00:14:59.994 "percent": 28 00:14:59.994 } 00:14:59.994 }, 00:14:59.994 "base_bdevs_list": [ 00:14:59.994 { 00:14:59.994 "name": "spare", 00:14:59.994 "uuid": "f8d88db7-9bbf-5bc9-8ecb-aa0950ec6c99", 00:14:59.994 "is_configured": true, 00:14:59.994 "data_offset": 0, 00:14:59.994 "data_size": 65536 00:14:59.994 }, 00:14:59.994 { 00:14:59.994 "name": null, 00:14:59.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.994 "is_configured": false, 00:14:59.994 "data_offset": 0, 00:14:59.994 "data_size": 65536 00:14:59.994 }, 00:14:59.994 { 00:14:59.994 "name": "BaseBdev3", 00:14:59.994 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:14:59.994 "is_configured": true, 00:14:59.994 "data_offset": 0, 00:14:59.994 "data_size": 65536 00:14:59.994 }, 00:14:59.994 { 00:14:59.994 "name": "BaseBdev4", 00:14:59.994 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:14:59.994 "is_configured": true, 00:14:59.994 "data_offset": 0, 00:14:59.994 "data_size": 
65536 00:14:59.994 } 00:14:59.994 ] 00:14:59.994 }' 00:14:59.994 18:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.994 18:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.994 18:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.994 18:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.994 18:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.254 120.50 IOPS, 361.50 MiB/s [2024-12-06T18:12:12.422Z] [2024-12-06 18:12:12.186796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:00.513 [2024-12-06 18:12:12.635420] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:01.081 [2024-12-06 18:12:13.087681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.081 105.80 IOPS, 317.40 MiB/s [2024-12-06T18:12:13.249Z] 18:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.081 "name": "raid_bdev1", 00:15:01.081 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:15:01.081 "strip_size_kb": 0, 00:15:01.081 "state": "online", 00:15:01.081 "raid_level": "raid1", 00:15:01.081 "superblock": false, 00:15:01.081 "num_base_bdevs": 4, 00:15:01.081 "num_base_bdevs_discovered": 3, 00:15:01.081 "num_base_bdevs_operational": 3, 00:15:01.081 "process": { 00:15:01.081 "type": "rebuild", 00:15:01.081 "target": "spare", 00:15:01.081 "progress": { 00:15:01.081 "blocks": 34816, 00:15:01.081 "percent": 53 00:15:01.081 } 00:15:01.081 }, 00:15:01.081 "base_bdevs_list": [ 00:15:01.081 { 00:15:01.081 "name": "spare", 00:15:01.081 "uuid": "f8d88db7-9bbf-5bc9-8ecb-aa0950ec6c99", 00:15:01.081 "is_configured": true, 00:15:01.081 "data_offset": 0, 00:15:01.081 "data_size": 65536 00:15:01.081 }, 00:15:01.081 { 00:15:01.081 "name": null, 00:15:01.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.081 "is_configured": false, 00:15:01.081 "data_offset": 0, 00:15:01.081 "data_size": 65536 00:15:01.081 }, 00:15:01.081 { 00:15:01.081 "name": "BaseBdev3", 00:15:01.081 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:15:01.081 "is_configured": true, 00:15:01.081 "data_offset": 0, 00:15:01.081 "data_size": 65536 00:15:01.081 }, 00:15:01.081 { 00:15:01.081 "name": "BaseBdev4", 00:15:01.081 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:15:01.081 "is_configured": true, 00:15:01.081 "data_offset": 0, 00:15:01.081 "data_size": 
65536 00:15:01.081 } 00:15:01.081 ] 00:15:01.081 }' 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.081 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.340 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.340 18:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.340 [2024-12-06 18:12:13.313445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:01.599 [2024-12-06 18:12:13.536885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:01.599 [2024-12-06 18:12:13.537256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:01.858 [2024-12-06 18:12:13.787310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:01.858 [2024-12-06 18:12:14.019268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:02.377 94.17 IOPS, 282.50 MiB/s [2024-12-06T18:12:14.545Z] 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.377 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.377 "name": "raid_bdev1", 00:15:02.377 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:15:02.377 "strip_size_kb": 0, 00:15:02.377 "state": "online", 00:15:02.377 "raid_level": "raid1", 00:15:02.377 "superblock": false, 00:15:02.377 "num_base_bdevs": 4, 00:15:02.377 "num_base_bdevs_discovered": 3, 00:15:02.377 "num_base_bdevs_operational": 3, 00:15:02.377 "process": { 00:15:02.377 "type": "rebuild", 00:15:02.377 "target": "spare", 00:15:02.377 "progress": { 00:15:02.377 "blocks": 49152, 00:15:02.377 "percent": 75 00:15:02.377 } 00:15:02.377 }, 00:15:02.377 "base_bdevs_list": [ 00:15:02.377 { 00:15:02.377 "name": "spare", 00:15:02.377 "uuid": "f8d88db7-9bbf-5bc9-8ecb-aa0950ec6c99", 00:15:02.377 "is_configured": true, 00:15:02.377 "data_offset": 0, 00:15:02.377 "data_size": 65536 00:15:02.377 }, 00:15:02.377 { 00:15:02.377 "name": null, 00:15:02.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.377 "is_configured": false, 00:15:02.377 "data_offset": 0, 00:15:02.377 "data_size": 65536 00:15:02.377 }, 00:15:02.377 { 00:15:02.377 "name": "BaseBdev3", 00:15:02.378 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:15:02.378 "is_configured": true, 00:15:02.378 
"data_offset": 0, 00:15:02.378 "data_size": 65536 00:15:02.378 }, 00:15:02.378 { 00:15:02.378 "name": "BaseBdev4", 00:15:02.378 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:15:02.378 "is_configured": true, 00:15:02.378 "data_offset": 0, 00:15:02.378 "data_size": 65536 00:15:02.378 } 00:15:02.378 ] 00:15:02.378 }' 00:15:02.378 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.378 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.378 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.378 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.378 18:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.635 [2024-12-06 18:12:14.687747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:02.894 [2024-12-06 18:12:14.911407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:03.153 86.29 IOPS, 258.86 MiB/s [2024-12-06T18:12:15.321Z] [2024-12-06 18:12:15.244588] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:03.413 [2024-12-06 18:12:15.344384] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:03.413 [2024-12-06 18:12:15.346683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.413 
18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.413 "name": "raid_bdev1", 00:15:03.413 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:15:03.413 "strip_size_kb": 0, 00:15:03.413 "state": "online", 00:15:03.413 "raid_level": "raid1", 00:15:03.413 "superblock": false, 00:15:03.413 "num_base_bdevs": 4, 00:15:03.413 "num_base_bdevs_discovered": 3, 00:15:03.413 "num_base_bdevs_operational": 3, 00:15:03.413 "base_bdevs_list": [ 00:15:03.413 { 00:15:03.413 "name": "spare", 00:15:03.413 "uuid": "f8d88db7-9bbf-5bc9-8ecb-aa0950ec6c99", 00:15:03.413 "is_configured": true, 00:15:03.413 "data_offset": 0, 00:15:03.413 "data_size": 65536 00:15:03.413 }, 00:15:03.413 { 00:15:03.413 "name": null, 00:15:03.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.413 "is_configured": false, 00:15:03.413 "data_offset": 0, 00:15:03.413 "data_size": 65536 00:15:03.413 }, 00:15:03.413 { 00:15:03.413 "name": "BaseBdev3", 00:15:03.413 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:15:03.413 "is_configured": true, 00:15:03.413 "data_offset": 0, 00:15:03.413 "data_size": 65536 
00:15:03.413 }, 00:15:03.413 { 00:15:03.413 "name": "BaseBdev4", 00:15:03.413 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:15:03.413 "is_configured": true, 00:15:03.413 "data_offset": 0, 00:15:03.413 "data_size": 65536 00:15:03.413 } 00:15:03.413 ] 00:15:03.413 }' 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.413 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.414 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:03.414 "name": "raid_bdev1", 00:15:03.414 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:15:03.414 "strip_size_kb": 0, 00:15:03.414 "state": "online", 00:15:03.414 "raid_level": "raid1", 00:15:03.414 "superblock": false, 00:15:03.414 "num_base_bdevs": 4, 00:15:03.414 "num_base_bdevs_discovered": 3, 00:15:03.414 "num_base_bdevs_operational": 3, 00:15:03.414 "base_bdevs_list": [ 00:15:03.414 { 00:15:03.414 "name": "spare", 00:15:03.414 "uuid": "f8d88db7-9bbf-5bc9-8ecb-aa0950ec6c99", 00:15:03.414 "is_configured": true, 00:15:03.414 "data_offset": 0, 00:15:03.414 "data_size": 65536 00:15:03.414 }, 00:15:03.414 { 00:15:03.414 "name": null, 00:15:03.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.414 "is_configured": false, 00:15:03.414 "data_offset": 0, 00:15:03.414 "data_size": 65536 00:15:03.414 }, 00:15:03.414 { 00:15:03.414 "name": "BaseBdev3", 00:15:03.414 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:15:03.414 "is_configured": true, 00:15:03.414 "data_offset": 0, 00:15:03.414 "data_size": 65536 00:15:03.414 }, 00:15:03.414 { 00:15:03.414 "name": "BaseBdev4", 00:15:03.414 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:15:03.414 "is_configured": true, 00:15:03.414 "data_offset": 0, 00:15:03.414 "data_size": 65536 00:15:03.414 } 00:15:03.414 ] 00:15:03.414 }' 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.673 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.673 "name": "raid_bdev1", 00:15:03.673 "uuid": "f4b1f528-4d8a-401d-9408-d4a846feedce", 00:15:03.673 "strip_size_kb": 0, 00:15:03.673 "state": "online", 00:15:03.673 "raid_level": "raid1", 00:15:03.673 "superblock": false, 00:15:03.673 "num_base_bdevs": 4, 00:15:03.673 "num_base_bdevs_discovered": 3, 00:15:03.673 "num_base_bdevs_operational": 3, 00:15:03.673 "base_bdevs_list": [ 00:15:03.673 { 00:15:03.673 "name": "spare", 00:15:03.673 "uuid": 
"f8d88db7-9bbf-5bc9-8ecb-aa0950ec6c99", 00:15:03.673 "is_configured": true, 00:15:03.673 "data_offset": 0, 00:15:03.673 "data_size": 65536 00:15:03.673 }, 00:15:03.673 { 00:15:03.673 "name": null, 00:15:03.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.674 "is_configured": false, 00:15:03.674 "data_offset": 0, 00:15:03.674 "data_size": 65536 00:15:03.674 }, 00:15:03.674 { 00:15:03.674 "name": "BaseBdev3", 00:15:03.674 "uuid": "7f64847c-d58b-5836-a7b6-85a8310493cf", 00:15:03.674 "is_configured": true, 00:15:03.674 "data_offset": 0, 00:15:03.674 "data_size": 65536 00:15:03.674 }, 00:15:03.674 { 00:15:03.674 "name": "BaseBdev4", 00:15:03.674 "uuid": "c6cd6ef3-f1e8-55c7-80ed-54a8d65584e6", 00:15:03.674 "is_configured": true, 00:15:03.674 "data_offset": 0, 00:15:03.674 "data_size": 65536 00:15:03.674 } 00:15:03.674 ] 00:15:03.674 }' 00:15:03.674 18:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.674 18:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.251 [2024-12-06 18:12:16.138855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.251 [2024-12-06 18:12:16.138953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.251 79.75 IOPS, 239.25 MiB/s 00:15:04.251 Latency(us) 00:15:04.251 [2024-12-06T18:12:16.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.251 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:04.251 raid_bdev1 : 8.11 78.88 236.64 0.00 0.00 17851.73 321.96 119052.30 00:15:04.251 
[2024-12-06T18:12:16.419Z] =================================================================================================================== 00:15:04.251 [2024-12-06T18:12:16.419Z] Total : 78.88 236.64 0.00 0.00 17851.73 321.96 119052.30 00:15:04.251 [2024-12-06 18:12:16.261418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.251 [2024-12-06 18:12:16.261584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.251 [2024-12-06 18:12:16.261700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.251 [2024-12-06 18:12:16.261748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:04.251 { 00:15:04.251 "results": [ 00:15:04.251 { 00:15:04.251 "job": "raid_bdev1", 00:15:04.251 "core_mask": "0x1", 00:15:04.251 "workload": "randrw", 00:15:04.251 "percentage": 50, 00:15:04.251 "status": "finished", 00:15:04.251 "queue_depth": 2, 00:15:04.251 "io_size": 3145728, 00:15:04.251 "runtime": 8.113659, 00:15:04.251 "iops": 78.87933175402121, 00:15:04.251 "mibps": 236.6379952620636, 00:15:04.251 "io_failed": 0, 00:15:04.251 "io_timeout": 0, 00:15:04.251 "avg_latency_us": 17851.732401746725, 00:15:04.251 "min_latency_us": 321.95633187772927, 00:15:04.251 "max_latency_us": 119052.29694323144 00:15:04.251 } 00:15:04.251 ], 00:15:04.251 "core_count": 1 00:15:04.251 } 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.251 18:12:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.251 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:04.510 /dev/nbd0 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.510 1+0 records in 00:15:04.510 1+0 records out 00:15:04.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367118 s, 11.2 MB/s 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 
00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.510 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:04.769 /dev/nbd1 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.769 1+0 records in 00:15:04.769 1+0 records out 00:15:04.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310161 s, 13.2 MB/s 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.769 18:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:05.028 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:05.028 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.028 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:15:05.028 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.028 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:05.028 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.028 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:05.314 18:12:17 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:05.314 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.315 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:05.574 /dev/nbd1 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.574 1+0 records in 00:15:05.574 1+0 records out 00:15:05.574 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000504955 s, 8.1 MB/s 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.574 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.834 18:12:17 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.834 18:12:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79289 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79289 ']' 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79289 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79289 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79289' 00:15:06.093 killing process with pid 79289 00:15:06.093 18:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79289 00:15:06.093 Received shutdown signal, test time was about 10.044607 seconds 00:15:06.093 00:15:06.094 Latency(us) 00:15:06.094 [2024-12-06T18:12:18.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.094 [2024-12-06T18:12:18.262Z] =================================================================================================================== 00:15:06.094 [2024-12-06T18:12:18.262Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.094 18:12:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@978 -- # wait 79289 00:15:06.094 [2024-12-06 18:12:18.165897] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.661 [2024-12-06 18:12:18.604545] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.038 18:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:08.038 00:15:08.038 real 0m13.697s 00:15:08.038 user 0m17.265s 00:15:08.039 sys 0m1.950s 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.039 ************************************ 00:15:08.039 END TEST raid_rebuild_test_io 00:15:08.039 ************************************ 00:15:08.039 18:12:19 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:08.039 18:12:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:08.039 18:12:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.039 18:12:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.039 ************************************ 00:15:08.039 START TEST raid_rebuild_test_sb_io 00:15:08.039 ************************************ 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 
-- # local create_arg 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79698 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79698 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79698 ']' 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.039 18:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:08.039 [2024-12-06 18:12:20.060481] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:15:08.039 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:08.039 Zero copy mechanism will not be used. 00:15:08.039 [2024-12-06 18:12:20.060701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79698 ] 00:15:08.298 [2024-12-06 18:12:20.235221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.298 [2024-12-06 18:12:20.366248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.557 [2024-12-06 18:12:20.584286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.557 [2024-12-06 18:12:20.584328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.842 18:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.842 18:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:08.842 18:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.842 18:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:08.842 18:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:08.842 18:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.101 BaseBdev1_malloc 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.101 [2024-12-06 18:12:21.016714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:09.101 [2024-12-06 18:12:21.016783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.101 [2024-12-06 18:12:21.016809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:09.101 [2024-12-06 18:12:21.016822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.101 [2024-12-06 18:12:21.019136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.101 [2024-12-06 18:12:21.019178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.101 BaseBdev1 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.101 BaseBdev2_malloc 00:15:09.101 18:12:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.101 [2024-12-06 18:12:21.074112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:09.101 [2024-12-06 18:12:21.074241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.101 [2024-12-06 18:12:21.074287] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:09.101 [2024-12-06 18:12:21.074325] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.101 [2024-12-06 18:12:21.076763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.101 [2024-12-06 18:12:21.076837] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:09.101 BaseBdev2 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.101 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.102 BaseBdev3_malloc 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # 
rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.102 [2024-12-06 18:12:21.152607] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:09.102 [2024-12-06 18:12:21.152667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.102 [2024-12-06 18:12:21.152693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:09.102 [2024-12-06 18:12:21.152705] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.102 [2024-12-06 18:12:21.155001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.102 [2024-12-06 18:12:21.155102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:09.102 BaseBdev3 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.102 BaseBdev4_malloc 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.102 [2024-12-06 18:12:21.212133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:09.102 [2024-12-06 18:12:21.212259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.102 [2024-12-06 18:12:21.212287] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:09.102 [2024-12-06 18:12:21.212300] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.102 [2024-12-06 18:12:21.214451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.102 [2024-12-06 18:12:21.214488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:09.102 BaseBdev4 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.102 spare_malloc 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.102 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.361 spare_delay 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.362 18:12:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.362 [2024-12-06 18:12:21.279539] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:09.362 [2024-12-06 18:12:21.279667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.362 [2024-12-06 18:12:21.279709] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:09.362 [2024-12-06 18:12:21.279766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.362 [2024-12-06 18:12:21.282106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.362 [2024-12-06 18:12:21.282181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:09.362 spare 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.362 [2024-12-06 18:12:21.291574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.362 [2024-12-06 18:12:21.293568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.362 [2024-12-06 18:12:21.293676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.362 [2024-12-06 18:12:21.293764] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:09.362 [2024-12-06 18:12:21.293986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:09.362 [2024-12-06 18:12:21.294035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:09.362 [2024-12-06 18:12:21.294337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:09.362 [2024-12-06 18:12:21.294553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:09.362 [2024-12-06 18:12:21.294596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:09.362 [2024-12-06 18:12:21.294826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.362 "name": "raid_bdev1", 00:15:09.362 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:09.362 "strip_size_kb": 0, 00:15:09.362 "state": "online", 00:15:09.362 "raid_level": "raid1", 00:15:09.362 "superblock": true, 00:15:09.362 "num_base_bdevs": 4, 00:15:09.362 "num_base_bdevs_discovered": 4, 00:15:09.362 "num_base_bdevs_operational": 4, 00:15:09.362 "base_bdevs_list": [ 00:15:09.362 { 00:15:09.362 "name": "BaseBdev1", 00:15:09.362 "uuid": "18d8b0ef-d742-5dde-8a11-d624bdfc9152", 00:15:09.362 "is_configured": true, 00:15:09.362 "data_offset": 2048, 00:15:09.362 "data_size": 63488 00:15:09.362 }, 00:15:09.362 { 00:15:09.362 "name": "BaseBdev2", 00:15:09.362 "uuid": "dbf53e57-215b-594e-98db-cdd1a6ebdb64", 00:15:09.362 "is_configured": true, 00:15:09.362 "data_offset": 2048, 00:15:09.362 "data_size": 63488 00:15:09.362 }, 00:15:09.362 { 00:15:09.362 "name": "BaseBdev3", 00:15:09.362 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:09.362 "is_configured": true, 00:15:09.362 "data_offset": 2048, 00:15:09.362 "data_size": 63488 00:15:09.362 }, 00:15:09.362 { 00:15:09.362 "name": "BaseBdev4", 00:15:09.362 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:09.362 
"is_configured": true, 00:15:09.362 "data_offset": 2048, 00:15:09.362 "data_size": 63488 00:15:09.362 } 00:15:09.362 ] 00:15:09.362 }' 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.362 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:09.621 [2024-12-06 18:12:21.739240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:09.621 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # 
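The repeated `data_offset: 2048, data_size: 63488` pairs in the JSON above follow from the test's bdev geometry: each base bdev is created with `bdev_malloc_create 32 512` (32 MiB of 512-byte blocks, i.e. 65536 blocks), and because the array is created with a superblock (`-s`), the first 2048 blocks appear reserved, leaving 63488 data blocks — matching the `blockcnt 63488` reported at configure time. A quick sanity check of that reading:

```shell
# Geometry check for the malloc base bdevs used in this test:
# 32 MiB of 512-byte blocks, minus the 2048-block region implied by data_offset.
size_mib=32
block_size=512
offset_blocks=2048   # data_offset reported by bdev_raid_get_bdevs

total_blocks=$(( size_mib * 1024 * 1024 / block_size ))
data_blocks=$(( total_blocks - offset_blocks ))

echo "total=$total_blocks data=$data_blocks"   # total=65536 data=63488
```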
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.880 [2024-12-06 18:12:21.826670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.880 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.880 "name": "raid_bdev1", 00:15:09.880 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:09.880 "strip_size_kb": 0, 00:15:09.880 "state": "online", 00:15:09.880 "raid_level": "raid1", 00:15:09.880 "superblock": true, 00:15:09.880 "num_base_bdevs": 4, 00:15:09.880 "num_base_bdevs_discovered": 3, 00:15:09.880 "num_base_bdevs_operational": 3, 00:15:09.880 "base_bdevs_list": [ 00:15:09.880 { 00:15:09.880 "name": null, 00:15:09.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.880 "is_configured": false, 00:15:09.881 "data_offset": 0, 00:15:09.881 "data_size": 63488 00:15:09.881 }, 00:15:09.881 { 00:15:09.881 "name": "BaseBdev2", 00:15:09.881 "uuid": "dbf53e57-215b-594e-98db-cdd1a6ebdb64", 00:15:09.881 "is_configured": true, 00:15:09.881 "data_offset": 2048, 00:15:09.881 "data_size": 63488 00:15:09.881 }, 00:15:09.881 { 00:15:09.881 "name": "BaseBdev3", 00:15:09.881 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:09.881 "is_configured": true, 00:15:09.881 "data_offset": 2048, 00:15:09.881 "data_size": 63488 00:15:09.881 }, 00:15:09.881 { 00:15:09.881 "name": "BaseBdev4", 00:15:09.881 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:09.881 "is_configured": true, 00:15:09.881 "data_offset": 2048, 00:15:09.881 "data_size": 63488 00:15:09.881 } 00:15:09.881 ] 00:15:09.881 }' 00:15:09.881 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.881 18:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.881 [2024-12-06 
18:12:21.919188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:09.881 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.881 Zero copy mechanism will not be used. 00:15:09.881 Running I/O for 60 seconds... 00:15:10.140 18:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.140 18:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.140 18:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.140 [2024-12-06 18:12:22.242288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.140 18:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.140 18:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:10.399 [2024-12-06 18:12:22.328293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:10.399 [2024-12-06 18:12:22.330643] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:10.399 [2024-12-06 18:12:22.451660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:10.399 [2024-12-06 18:12:22.453262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:10.658 [2024-12-06 18:12:22.691133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:10.658 [2024-12-06 18:12:22.691611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:10.917 158.00 IOPS, 474.00 MiB/s [2024-12-06T18:12:23.085Z] [2024-12-06 18:12:22.931628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
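The throughput figures interleaved with the log (e.g. `158.00 IOPS, 474.00 MiB/s`) are consistent with the 3 MiB I/O size mentioned in the zero-copy notice above (3145728 bytes): 158 I/Os per second of 3 MiB each is exactly 474 MiB/s. A sketch of that arithmetic:

```shell
# Cross-check the reported bandwidth against the reported IOPS,
# using the 3145728-byte I/O size from the zero-copy threshold notice.
io_size_bytes=3145728
iops=158

mib_per_s=$(( iops * io_size_bytes / 1024 / 1024 ))
echo "${mib_per_s} MiB/s"   # 474 MiB/s
```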
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:11.176 [2024-12-06 18:12:23.146499] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:11.176 [2024-12-06 18:12:23.147011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:11.176 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.176 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.176 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.176 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.176 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.176 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.176 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.176 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.176 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.177 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.436 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.436 "name": "raid_bdev1", 00:15:11.436 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:11.436 "strip_size_kb": 0, 00:15:11.436 "state": "online", 00:15:11.436 "raid_level": "raid1", 00:15:11.436 "superblock": true, 00:15:11.436 "num_base_bdevs": 4, 00:15:11.436 "num_base_bdevs_discovered": 4, 00:15:11.436 "num_base_bdevs_operational": 4, 00:15:11.436 
"process": { 00:15:11.436 "type": "rebuild", 00:15:11.436 "target": "spare", 00:15:11.436 "progress": { 00:15:11.436 "blocks": 12288, 00:15:11.436 "percent": 19 00:15:11.436 } 00:15:11.436 }, 00:15:11.436 "base_bdevs_list": [ 00:15:11.436 { 00:15:11.436 "name": "spare", 00:15:11.436 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:11.436 "is_configured": true, 00:15:11.436 "data_offset": 2048, 00:15:11.436 "data_size": 63488 00:15:11.436 }, 00:15:11.436 { 00:15:11.436 "name": "BaseBdev2", 00:15:11.436 "uuid": "dbf53e57-215b-594e-98db-cdd1a6ebdb64", 00:15:11.436 "is_configured": true, 00:15:11.436 "data_offset": 2048, 00:15:11.436 "data_size": 63488 00:15:11.436 }, 00:15:11.436 { 00:15:11.436 "name": "BaseBdev3", 00:15:11.436 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:11.436 "is_configured": true, 00:15:11.436 "data_offset": 2048, 00:15:11.436 "data_size": 63488 00:15:11.436 }, 00:15:11.436 { 00:15:11.436 "name": "BaseBdev4", 00:15:11.436 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:11.436 "is_configured": true, 00:15:11.436 "data_offset": 2048, 00:15:11.436 "data_size": 63488 00:15:11.436 } 00:15:11.436 ] 00:15:11.436 }' 00:15:11.436 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.436 [2024-12-06 18:12:23.400065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:11.436 [2024-12-06 18:12:23.401798] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:11.436 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.436 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.436 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.436 18:12:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:11.436 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.436 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.436 [2024-12-06 18:12:23.446837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.695 [2024-12-06 18:12:23.629293] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.695 [2024-12-06 18:12:23.651229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.695 [2024-12-06 18:12:23.651458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.695 [2024-12-06 18:12:23.651499] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.695 [2024-12-06 18:12:23.689793] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.695 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.695 "name": "raid_bdev1", 00:15:11.695 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:11.695 "strip_size_kb": 0, 00:15:11.695 "state": "online", 00:15:11.695 "raid_level": "raid1", 00:15:11.695 "superblock": true, 00:15:11.695 "num_base_bdevs": 4, 00:15:11.695 "num_base_bdevs_discovered": 3, 00:15:11.695 "num_base_bdevs_operational": 3, 00:15:11.695 "base_bdevs_list": [ 00:15:11.695 { 00:15:11.695 "name": null, 00:15:11.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.695 "is_configured": false, 00:15:11.695 "data_offset": 0, 00:15:11.695 "data_size": 63488 00:15:11.695 }, 00:15:11.695 { 00:15:11.695 "name": "BaseBdev2", 00:15:11.695 "uuid": "dbf53e57-215b-594e-98db-cdd1a6ebdb64", 00:15:11.695 "is_configured": true, 00:15:11.695 "data_offset": 2048, 00:15:11.695 "data_size": 63488 00:15:11.695 }, 00:15:11.695 { 00:15:11.695 "name": "BaseBdev3", 00:15:11.695 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:11.695 "is_configured": true, 00:15:11.695 "data_offset": 
2048, 00:15:11.695 "data_size": 63488 00:15:11.695 }, 00:15:11.696 { 00:15:11.696 "name": "BaseBdev4", 00:15:11.696 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:11.696 "is_configured": true, 00:15:11.696 "data_offset": 2048, 00:15:11.696 "data_size": 63488 00:15:11.696 } 00:15:11.696 ] 00:15:11.696 }' 00:15:11.696 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.696 18:12:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.212 118.50 IOPS, 355.50 MiB/s [2024-12-06T18:12:24.380Z] 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.212 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.212 "name": "raid_bdev1", 00:15:12.212 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:12.212 "strip_size_kb": 0, 00:15:12.212 "state": "online", 00:15:12.212 "raid_level": 
"raid1", 00:15:12.212 "superblock": true, 00:15:12.212 "num_base_bdevs": 4, 00:15:12.212 "num_base_bdevs_discovered": 3, 00:15:12.212 "num_base_bdevs_operational": 3, 00:15:12.212 "base_bdevs_list": [ 00:15:12.212 { 00:15:12.212 "name": null, 00:15:12.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.212 "is_configured": false, 00:15:12.212 "data_offset": 0, 00:15:12.212 "data_size": 63488 00:15:12.212 }, 00:15:12.212 { 00:15:12.212 "name": "BaseBdev2", 00:15:12.212 "uuid": "dbf53e57-215b-594e-98db-cdd1a6ebdb64", 00:15:12.212 "is_configured": true, 00:15:12.212 "data_offset": 2048, 00:15:12.212 "data_size": 63488 00:15:12.212 }, 00:15:12.212 { 00:15:12.212 "name": "BaseBdev3", 00:15:12.212 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:12.212 "is_configured": true, 00:15:12.213 "data_offset": 2048, 00:15:12.213 "data_size": 63488 00:15:12.213 }, 00:15:12.213 { 00:15:12.213 "name": "BaseBdev4", 00:15:12.213 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:12.213 "is_configured": true, 00:15:12.213 "data_offset": 2048, 00:15:12.213 "data_size": 63488 00:15:12.213 } 00:15:12.213 ] 00:15:12.213 }' 00:15:12.213 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.213 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.213 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.213 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.213 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.213 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.213 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.213 [2024-12-06 18:12:24.350313] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.472 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.472 18:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:12.472 [2024-12-06 18:12:24.409742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:12.472 [2024-12-06 18:12:24.411921] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.472 [2024-12-06 18:12:24.531399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:12.472 [2024-12-06 18:12:24.532168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:12.731 [2024-12-06 18:12:24.652097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:12.731 [2024-12-06 18:12:24.652463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:12.989 [2024-12-06 18:12:24.916277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:12.989 136.33 IOPS, 409.00 MiB/s [2024-12-06T18:12:25.157Z] [2024-12-06 18:12:25.053238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:12.989 [2024-12-06 18:12:25.053673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:13.246 [2024-12-06 18:12:25.278185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:13.246 [2024-12-06 18:12:25.278805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:13.246 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.246 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.246 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.246 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.246 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.246 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.246 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.246 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.246 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.504 "name": "raid_bdev1", 00:15:13.504 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:13.504 "strip_size_kb": 0, 00:15:13.504 "state": "online", 00:15:13.504 "raid_level": "raid1", 00:15:13.504 "superblock": true, 00:15:13.504 "num_base_bdevs": 4, 00:15:13.504 "num_base_bdevs_discovered": 4, 00:15:13.504 "num_base_bdevs_operational": 4, 00:15:13.504 "process": { 00:15:13.504 "type": "rebuild", 00:15:13.504 "target": "spare", 00:15:13.504 "progress": { 00:15:13.504 "blocks": 16384, 00:15:13.504 "percent": 25 00:15:13.504 } 00:15:13.504 }, 00:15:13.504 "base_bdevs_list": [ 00:15:13.504 { 00:15:13.504 "name": "spare", 00:15:13.504 "uuid": 
"378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:13.504 "is_configured": true, 00:15:13.504 "data_offset": 2048, 00:15:13.504 "data_size": 63488 00:15:13.504 }, 00:15:13.504 { 00:15:13.504 "name": "BaseBdev2", 00:15:13.504 "uuid": "dbf53e57-215b-594e-98db-cdd1a6ebdb64", 00:15:13.504 "is_configured": true, 00:15:13.504 "data_offset": 2048, 00:15:13.504 "data_size": 63488 00:15:13.504 }, 00:15:13.504 { 00:15:13.504 "name": "BaseBdev3", 00:15:13.504 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:13.504 "is_configured": true, 00:15:13.504 "data_offset": 2048, 00:15:13.504 "data_size": 63488 00:15:13.504 }, 00:15:13.504 { 00:15:13.504 "name": "BaseBdev4", 00:15:13.504 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:13.504 "is_configured": true, 00:15:13.504 "data_offset": 2048, 00:15:13.504 "data_size": 63488 00:15:13.504 } 00:15:13.504 ] 00:15:13.504 }' 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:13.504 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:13.504 
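The `bdev_raid.sh: line 666: [: =: unary operator expected` message captured above is the classic unquoted-empty-variable pitfall in bash: when the variable in `'[' ... = false ']'` expands to nothing, `[` receives only the tokens `= false` and rejects `=` as a unary operator. A minimal reproduction (the variable name here is hypothetical, not taken from bdev_raid.sh):

```shell
#!/usr/bin/env bash
# Reproduce the "[: =: unary operator expected" failure mode seen in the log.
flag=""   # hypothetical stand-in for the empty test flag

[ $flag = false ] 2>/dev/null   # expands to: [ = false ]  -> syntax error
unquoted_status=$?               # test returns 2 on a usage error

[ "$flag" = false ]              # expands to: [ "" = false ] -> valid comparison
quoted_status=$?                 # returns 1: well-formed test, simply false

echo "unquoted=$unquoted_status quoted=$quoted_status"
```

Quoting the expansion (or using `[[ ... ]]`, which does not word-split) turns the malformed test into a well-formed comparison that merely evaluates to false, which is why the script continues past line 666 despite the diagnostic.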
18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.504 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.504 [2024-12-06 18:12:25.571280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.504 [2024-12-06 18:12:25.597446] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:13.761 [2024-12-06 18:12:25.761867] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:13.761 [2024-12-06 18:12:25.762033] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:13.761 [2024-12-06 18:12:25.766132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:13.761 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.761 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:13.761 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:13.761 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.762 "name": "raid_bdev1", 00:15:13.762 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:13.762 "strip_size_kb": 0, 00:15:13.762 "state": "online", 00:15:13.762 "raid_level": "raid1", 00:15:13.762 "superblock": true, 00:15:13.762 "num_base_bdevs": 4, 00:15:13.762 "num_base_bdevs_discovered": 3, 00:15:13.762 "num_base_bdevs_operational": 3, 00:15:13.762 "process": { 00:15:13.762 "type": "rebuild", 00:15:13.762 "target": "spare", 00:15:13.762 "progress": { 00:15:13.762 "blocks": 20480, 00:15:13.762 "percent": 32 00:15:13.762 } 00:15:13.762 }, 00:15:13.762 "base_bdevs_list": [ 00:15:13.762 { 00:15:13.762 "name": "spare", 00:15:13.762 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:13.762 "is_configured": true, 00:15:13.762 "data_offset": 2048, 00:15:13.762 "data_size": 63488 00:15:13.762 }, 00:15:13.762 { 00:15:13.762 "name": null, 00:15:13.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.762 "is_configured": false, 00:15:13.762 "data_offset": 0, 00:15:13.762 "data_size": 63488 00:15:13.762 }, 00:15:13.762 { 00:15:13.762 "name": "BaseBdev3", 00:15:13.762 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:13.762 "is_configured": true, 00:15:13.762 "data_offset": 2048, 00:15:13.762 "data_size": 63488 00:15:13.762 }, 00:15:13.762 { 00:15:13.762 "name": "BaseBdev4", 00:15:13.762 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:13.762 
"is_configured": true, 00:15:13.762 "data_offset": 2048, 00:15:13.762 "data_size": 63488 00:15:13.762 } 00:15:13.762 ] 00:15:13.762 }' 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.762 [2024-12-06 18:12:25.876846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:13.762 [2024-12-06 18:12:25.877254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=519 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.762 18:12:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.762 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.020 123.00 IOPS, 369.00 MiB/s [2024-12-06T18:12:26.188Z] 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.020 18:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.020 "name": "raid_bdev1", 00:15:14.020 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:14.020 "strip_size_kb": 0, 00:15:14.020 "state": "online", 00:15:14.020 "raid_level": "raid1", 00:15:14.020 "superblock": true, 00:15:14.020 "num_base_bdevs": 4, 00:15:14.020 "num_base_bdevs_discovered": 3, 00:15:14.020 "num_base_bdevs_operational": 3, 00:15:14.020 "process": { 00:15:14.020 "type": "rebuild", 00:15:14.020 "target": "spare", 00:15:14.020 "progress": { 00:15:14.020 "blocks": 22528, 00:15:14.020 "percent": 35 00:15:14.020 } 00:15:14.020 }, 00:15:14.020 "base_bdevs_list": [ 00:15:14.020 { 00:15:14.020 "name": "spare", 00:15:14.020 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:14.020 "is_configured": true, 00:15:14.020 "data_offset": 2048, 00:15:14.020 "data_size": 63488 00:15:14.020 }, 00:15:14.020 { 00:15:14.020 "name": null, 00:15:14.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.020 "is_configured": false, 00:15:14.020 "data_offset": 0, 00:15:14.020 "data_size": 63488 00:15:14.020 }, 00:15:14.020 { 00:15:14.020 "name": "BaseBdev3", 00:15:14.020 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:14.020 "is_configured": true, 00:15:14.020 "data_offset": 2048, 00:15:14.020 "data_size": 63488 00:15:14.020 }, 00:15:14.020 { 00:15:14.020 "name": "BaseBdev4", 00:15:14.020 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:14.020 "is_configured": true, 00:15:14.020 "data_offset": 2048, 00:15:14.020 "data_size": 63488 00:15:14.020 } 00:15:14.020 ] 00:15:14.020 }' 00:15:14.020 18:12:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.020 18:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.020 18:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.020 18:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.020 18:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.020 [2024-12-06 18:12:26.107576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:14.586 [2024-12-06 18:12:26.545207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:14.843 [2024-12-06 18:12:26.892915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:14.843 [2024-12-06 18:12:26.894221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:15.101 108.80 IOPS, 326.40 MiB/s [2024-12-06T18:12:27.269Z] 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.101 "name": "raid_bdev1", 00:15:15.101 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:15.101 "strip_size_kb": 0, 00:15:15.101 "state": "online", 00:15:15.101 "raid_level": "raid1", 00:15:15.101 "superblock": true, 00:15:15.101 "num_base_bdevs": 4, 00:15:15.101 "num_base_bdevs_discovered": 3, 00:15:15.101 "num_base_bdevs_operational": 3, 00:15:15.101 "process": { 00:15:15.101 "type": "rebuild", 00:15:15.101 "target": "spare", 00:15:15.101 "progress": { 00:15:15.101 "blocks": 38912, 00:15:15.101 "percent": 61 00:15:15.101 } 00:15:15.101 }, 00:15:15.101 "base_bdevs_list": [ 00:15:15.101 { 00:15:15.101 "name": "spare", 00:15:15.101 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:15.101 "is_configured": true, 00:15:15.101 "data_offset": 2048, 00:15:15.101 "data_size": 63488 00:15:15.101 }, 00:15:15.101 { 00:15:15.101 "name": null, 00:15:15.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.101 "is_configured": false, 00:15:15.101 "data_offset": 0, 00:15:15.101 "data_size": 63488 00:15:15.101 }, 00:15:15.101 { 00:15:15.101 "name": "BaseBdev3", 00:15:15.101 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:15.101 "is_configured": true, 00:15:15.101 "data_offset": 2048, 00:15:15.101 "data_size": 63488 00:15:15.101 }, 00:15:15.101 { 00:15:15.101 "name": "BaseBdev4", 00:15:15.101 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:15.101 
"is_configured": true, 00:15:15.101 "data_offset": 2048, 00:15:15.101 "data_size": 63488 00:15:15.101 } 00:15:15.101 ] 00:15:15.101 }' 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.101 18:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.035 97.00 IOPS, 291.00 MiB/s [2024-12-06T18:12:28.203Z] [2024-12-06 18:12:27.993148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.293 "name": "raid_bdev1", 00:15:16.293 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:16.293 "strip_size_kb": 0, 00:15:16.293 "state": "online", 00:15:16.293 "raid_level": "raid1", 00:15:16.293 "superblock": true, 00:15:16.293 "num_base_bdevs": 4, 00:15:16.293 "num_base_bdevs_discovered": 3, 00:15:16.293 "num_base_bdevs_operational": 3, 00:15:16.293 "process": { 00:15:16.293 "type": "rebuild", 00:15:16.293 "target": "spare", 00:15:16.293 "progress": { 00:15:16.293 "blocks": 59392, 00:15:16.293 "percent": 93 00:15:16.293 } 00:15:16.293 }, 00:15:16.293 "base_bdevs_list": [ 00:15:16.293 { 00:15:16.293 "name": "spare", 00:15:16.293 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 2048, 00:15:16.293 "data_size": 63488 00:15:16.293 }, 00:15:16.293 { 00:15:16.293 "name": null, 00:15:16.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.293 "is_configured": false, 00:15:16.293 "data_offset": 0, 00:15:16.293 "data_size": 63488 00:15:16.293 }, 00:15:16.293 { 00:15:16.293 "name": "BaseBdev3", 00:15:16.293 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 2048, 00:15:16.293 "data_size": 63488 00:15:16.293 }, 00:15:16.293 { 00:15:16.293 "name": "BaseBdev4", 00:15:16.293 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:16.293 "is_configured": true, 00:15:16.293 "data_offset": 2048, 00:15:16.293 "data_size": 63488 00:15:16.293 } 00:15:16.293 ] 00:15:16.293 }' 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.293 18:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.293 [2024-12-06 18:12:28.409418] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:16.293 [2024-12-06 18:12:28.443244] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:16.293 [2024-12-06 18:12:28.448040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.425 90.29 IOPS, 270.86 MiB/s [2024-12-06T18:12:29.593Z] 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.425 18:12:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.425 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.425 "name": "raid_bdev1", 00:15:17.425 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:17.425 "strip_size_kb": 0, 00:15:17.425 "state": "online", 00:15:17.425 "raid_level": "raid1", 00:15:17.425 "superblock": true, 00:15:17.425 "num_base_bdevs": 4, 00:15:17.425 "num_base_bdevs_discovered": 3, 00:15:17.425 "num_base_bdevs_operational": 3, 00:15:17.425 "base_bdevs_list": [ 00:15:17.425 { 00:15:17.425 "name": "spare", 00:15:17.425 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:17.425 "is_configured": true, 00:15:17.425 "data_offset": 2048, 00:15:17.425 "data_size": 63488 00:15:17.425 }, 00:15:17.425 { 00:15:17.425 "name": null, 00:15:17.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.425 "is_configured": false, 00:15:17.425 "data_offset": 0, 00:15:17.425 "data_size": 63488 00:15:17.425 }, 00:15:17.425 { 00:15:17.425 "name": "BaseBdev3", 00:15:17.425 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:17.425 "is_configured": true, 00:15:17.425 "data_offset": 2048, 00:15:17.425 "data_size": 63488 00:15:17.425 }, 00:15:17.425 { 00:15:17.425 "name": "BaseBdev4", 00:15:17.426 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:17.426 "is_configured": true, 00:15:17.426 "data_offset": 2048, 00:15:17.426 "data_size": 63488 00:15:17.426 } 00:15:17.426 ] 00:15:17.426 }' 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:17.426 18:12:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.426 "name": "raid_bdev1", 00:15:17.426 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:17.426 "strip_size_kb": 0, 00:15:17.426 "state": "online", 00:15:17.426 "raid_level": "raid1", 00:15:17.426 "superblock": true, 00:15:17.426 "num_base_bdevs": 4, 00:15:17.426 "num_base_bdevs_discovered": 3, 00:15:17.426 "num_base_bdevs_operational": 3, 00:15:17.426 "base_bdevs_list": [ 00:15:17.426 { 00:15:17.426 "name": "spare", 00:15:17.426 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:17.426 "is_configured": true, 00:15:17.426 "data_offset": 2048, 00:15:17.426 "data_size": 63488 00:15:17.426 }, 00:15:17.426 { 00:15:17.426 "name": null, 00:15:17.426 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:17.426 "is_configured": false, 00:15:17.426 "data_offset": 0, 00:15:17.426 "data_size": 63488 00:15:17.426 }, 00:15:17.426 { 00:15:17.426 "name": "BaseBdev3", 00:15:17.426 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:17.426 "is_configured": true, 00:15:17.426 "data_offset": 2048, 00:15:17.426 "data_size": 63488 00:15:17.426 }, 00:15:17.426 { 00:15:17.426 "name": "BaseBdev4", 00:15:17.426 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:17.426 "is_configured": true, 00:15:17.426 "data_offset": 2048, 00:15:17.426 "data_size": 63488 00:15:17.426 } 00:15:17.426 ] 00:15:17.426 }' 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.426 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.686 "name": "raid_bdev1", 00:15:17.686 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:17.686 "strip_size_kb": 0, 00:15:17.686 "state": "online", 00:15:17.686 "raid_level": "raid1", 00:15:17.686 "superblock": true, 00:15:17.686 "num_base_bdevs": 4, 00:15:17.686 "num_base_bdevs_discovered": 3, 00:15:17.686 "num_base_bdevs_operational": 3, 00:15:17.686 "base_bdevs_list": [ 00:15:17.686 { 00:15:17.686 "name": "spare", 00:15:17.686 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:17.686 "is_configured": true, 00:15:17.686 "data_offset": 2048, 00:15:17.686 "data_size": 63488 00:15:17.686 }, 00:15:17.686 { 00:15:17.686 "name": null, 00:15:17.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.686 "is_configured": false, 00:15:17.686 "data_offset": 0, 00:15:17.686 "data_size": 63488 00:15:17.686 }, 00:15:17.686 { 00:15:17.686 "name": "BaseBdev3", 00:15:17.686 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:17.686 "is_configured": true, 00:15:17.686 "data_offset": 2048, 00:15:17.686 "data_size": 63488 00:15:17.686 }, 00:15:17.686 { 00:15:17.686 "name": 
"BaseBdev4", 00:15:17.686 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:17.686 "is_configured": true, 00:15:17.686 "data_offset": 2048, 00:15:17.686 "data_size": 63488 00:15:17.686 } 00:15:17.686 ] 00:15:17.686 }' 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.686 18:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.945 82.38 IOPS, 247.12 MiB/s [2024-12-06T18:12:30.113Z] 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.945 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.945 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.945 [2024-12-06 18:12:30.100195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.945 [2024-12-06 18:12:30.100313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.264 00:15:18.264 Latency(us) 00:15:18.264 [2024-12-06T18:12:30.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.264 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:18.264 raid_bdev1 : 8.29 80.54 241.61 0.00 0.00 16951.57 436.43 112641.79 00:15:18.264 [2024-12-06T18:12:30.432Z] =================================================================================================================== 00:15:18.264 [2024-12-06T18:12:30.432Z] Total : 80.54 241.61 0.00 0.00 16951.57 436.43 112641.79 00:15:18.264 [2024-12-06 18:12:30.226567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.264 [2024-12-06 18:12:30.226742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.264 [2024-12-06 18:12:30.226867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:15:18.264 [2024-12-06 18:12:30.226879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:18.264 { 00:15:18.264 "results": [ 00:15:18.264 { 00:15:18.264 "job": "raid_bdev1", 00:15:18.264 "core_mask": "0x1", 00:15:18.264 "workload": "randrw", 00:15:18.264 "percentage": 50, 00:15:18.264 "status": "finished", 00:15:18.264 "queue_depth": 2, 00:15:18.264 "io_size": 3145728, 00:15:18.264 "runtime": 8.294364, 00:15:18.264 "iops": 80.53661498337908, 00:15:18.264 "mibps": 241.60984495013724, 00:15:18.264 "io_failed": 0, 00:15:18.264 "io_timeout": 0, 00:15:18.264 "avg_latency_us": 16951.57120518788, 00:15:18.264 "min_latency_us": 436.4296943231441, 00:15:18.264 "max_latency_us": 112641.78864628822 00:15:18.264 } 00:15:18.264 ], 00:15:18.264 "core_count": 1 00:15:18.264 } 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:18.264 18:12:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.264 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:18.560 /dev/nbd0 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.560 
18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.560 1+0 records in 00:15:18.560 1+0 records out 00:15:18.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439512 s, 9.3 MB/s 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # 
local rpc_server=/var/tmp/spdk.sock 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.560 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:18.819 /dev/nbd1 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.819 1+0 records in 00:15:18.819 1+0 records out 00:15:18.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029745 s, 13.8 MB/s 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:18.819 18:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:19.078 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:19.078 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.078 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:19.078 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.078 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:19.078 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.078 18:12:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:19.337 
18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.337 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:19.596 /dev/nbd1 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.596 1+0 records in 00:15:19.596 1+0 records out 00:15:19.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290743 s, 14.1 MB/s 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@890 -- # size=4096 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.596 18:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.163 
18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:20.163 18:12:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.163 [2024-12-06 18:12:32.318108] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.163 [2024-12-06 18:12:32.318197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.163 [2024-12-06 18:12:32.318229] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:20.163 [2024-12-06 18:12:32.318240] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.163 [2024-12-06 18:12:32.320880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.163 [2024-12-06 18:12:32.320929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.163 [2024-12-06 18:12:32.321057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:20.163 [2024-12-06 18:12:32.321146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.163 [2024-12-06 18:12:32.321327] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.163 [2024-12-06 18:12:32.321447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:20.163 spare 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.163 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.422 [2024-12-06 18:12:32.421378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:20.422 [2024-12-06 18:12:32.421427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:20.422 [2024-12-06 18:12:32.421847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:20.422 [2024-12-06 18:12:32.422124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:20.422 [2024-12-06 18:12:32.422147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:20.422 [2024-12-06 18:12:32.422434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.422 "name": "raid_bdev1", 00:15:20.422 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:20.422 "strip_size_kb": 0, 00:15:20.422 "state": "online", 00:15:20.422 "raid_level": "raid1", 00:15:20.422 "superblock": true, 00:15:20.422 "num_base_bdevs": 4, 00:15:20.422 "num_base_bdevs_discovered": 3, 00:15:20.422 "num_base_bdevs_operational": 3, 00:15:20.422 "base_bdevs_list": [ 00:15:20.422 { 00:15:20.422 "name": "spare", 00:15:20.422 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:20.422 "is_configured": true, 00:15:20.422 "data_offset": 2048, 00:15:20.422 "data_size": 63488 00:15:20.422 }, 00:15:20.422 { 00:15:20.422 "name": null, 
00:15:20.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.422 "is_configured": false, 00:15:20.422 "data_offset": 2048, 00:15:20.422 "data_size": 63488 00:15:20.422 }, 00:15:20.422 { 00:15:20.422 "name": "BaseBdev3", 00:15:20.422 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:20.422 "is_configured": true, 00:15:20.422 "data_offset": 2048, 00:15:20.422 "data_size": 63488 00:15:20.422 }, 00:15:20.422 { 00:15:20.422 "name": "BaseBdev4", 00:15:20.422 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:20.422 "is_configured": true, 00:15:20.422 "data_offset": 2048, 00:15:20.422 "data_size": 63488 00:15:20.422 } 00:15:20.422 ] 00:15:20.422 }' 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.422 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.988 "name": "raid_bdev1", 00:15:20.988 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:20.988 "strip_size_kb": 0, 00:15:20.988 "state": "online", 00:15:20.988 "raid_level": "raid1", 00:15:20.988 "superblock": true, 00:15:20.988 "num_base_bdevs": 4, 00:15:20.988 "num_base_bdevs_discovered": 3, 00:15:20.988 "num_base_bdevs_operational": 3, 00:15:20.988 "base_bdevs_list": [ 00:15:20.988 { 00:15:20.988 "name": "spare", 00:15:20.988 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:20.988 "is_configured": true, 00:15:20.988 "data_offset": 2048, 00:15:20.988 "data_size": 63488 00:15:20.988 }, 00:15:20.988 { 00:15:20.988 "name": null, 00:15:20.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.988 "is_configured": false, 00:15:20.988 "data_offset": 2048, 00:15:20.988 "data_size": 63488 00:15:20.988 }, 00:15:20.988 { 00:15:20.988 "name": "BaseBdev3", 00:15:20.988 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:20.988 "is_configured": true, 00:15:20.988 "data_offset": 2048, 00:15:20.988 "data_size": 63488 00:15:20.988 }, 00:15:20.988 { 00:15:20.988 "name": "BaseBdev4", 00:15:20.988 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:20.988 "is_configured": true, 00:15:20.988 "data_offset": 2048, 00:15:20.988 "data_size": 63488 00:15:20.988 } 00:15:20.988 ] 00:15:20.988 }' 00:15:20.988 18:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.988 [2024-12-06 18:12:33.081421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.988 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.989 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.989 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.989 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.989 "name": "raid_bdev1", 00:15:20.989 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:20.989 "strip_size_kb": 0, 00:15:20.989 "state": "online", 00:15:20.989 "raid_level": "raid1", 00:15:20.989 "superblock": true, 00:15:20.989 "num_base_bdevs": 4, 00:15:20.989 "num_base_bdevs_discovered": 2, 00:15:20.989 "num_base_bdevs_operational": 2, 00:15:20.989 "base_bdevs_list": [ 00:15:20.989 { 00:15:20.989 "name": null, 00:15:20.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.989 "is_configured": false, 00:15:20.989 "data_offset": 0, 00:15:20.989 "data_size": 63488 00:15:20.989 }, 00:15:20.989 { 00:15:20.989 "name": null, 00:15:20.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.989 "is_configured": false, 00:15:20.989 "data_offset": 2048, 00:15:20.989 "data_size": 63488 00:15:20.989 }, 00:15:20.989 { 00:15:20.989 "name": "BaseBdev3", 00:15:20.989 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:20.989 "is_configured": true, 00:15:20.989 "data_offset": 2048, 00:15:20.989 "data_size": 63488 00:15:20.989 }, 
00:15:20.989 { 00:15:20.989 "name": "BaseBdev4", 00:15:20.989 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:20.989 "is_configured": true, 00:15:20.989 "data_offset": 2048, 00:15:20.989 "data_size": 63488 00:15:20.989 } 00:15:20.989 ] 00:15:20.989 }' 00:15:20.989 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.989 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.554 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.554 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.554 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.554 [2024-12-06 18:12:33.576737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.554 [2024-12-06 18:12:33.576987] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:21.554 [2024-12-06 18:12:33.577004] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:21.554 [2024-12-06 18:12:33.577059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.554 [2024-12-06 18:12:33.594873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:21.554 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.554 18:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:21.554 [2024-12-06 18:12:33.597172] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.487 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.487 "name": "raid_bdev1", 00:15:22.487 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:22.487 "strip_size_kb": 0, 00:15:22.487 "state": "online", 
00:15:22.487 "raid_level": "raid1", 00:15:22.487 "superblock": true, 00:15:22.487 "num_base_bdevs": 4, 00:15:22.487 "num_base_bdevs_discovered": 3, 00:15:22.487 "num_base_bdevs_operational": 3, 00:15:22.487 "process": { 00:15:22.487 "type": "rebuild", 00:15:22.487 "target": "spare", 00:15:22.487 "progress": { 00:15:22.487 "blocks": 20480, 00:15:22.487 "percent": 32 00:15:22.487 } 00:15:22.487 }, 00:15:22.487 "base_bdevs_list": [ 00:15:22.487 { 00:15:22.487 "name": "spare", 00:15:22.487 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:22.487 "is_configured": true, 00:15:22.487 "data_offset": 2048, 00:15:22.487 "data_size": 63488 00:15:22.487 }, 00:15:22.487 { 00:15:22.487 "name": null, 00:15:22.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.487 "is_configured": false, 00:15:22.487 "data_offset": 2048, 00:15:22.487 "data_size": 63488 00:15:22.487 }, 00:15:22.487 { 00:15:22.487 "name": "BaseBdev3", 00:15:22.487 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:22.487 "is_configured": true, 00:15:22.487 "data_offset": 2048, 00:15:22.487 "data_size": 63488 00:15:22.487 }, 00:15:22.487 { 00:15:22.487 "name": "BaseBdev4", 00:15:22.487 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:22.487 "is_configured": true, 00:15:22.487 "data_offset": 2048, 00:15:22.487 "data_size": 63488 00:15:22.487 } 00:15:22.487 ] 00:15:22.487 }' 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:22.745 18:12:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.745 [2024-12-06 18:12:34.756574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.745 [2024-12-06 18:12:34.803547] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:22.745 [2024-12-06 18:12:34.803647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.745 [2024-12-06 18:12:34.803681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.745 [2024-12-06 18:12:34.803690] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.745 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.746 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.746 18:12:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.746 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.746 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.746 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.746 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.746 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.746 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.746 "name": "raid_bdev1", 00:15:22.746 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:22.746 "strip_size_kb": 0, 00:15:22.746 "state": "online", 00:15:22.746 "raid_level": "raid1", 00:15:22.746 "superblock": true, 00:15:22.746 "num_base_bdevs": 4, 00:15:22.746 "num_base_bdevs_discovered": 2, 00:15:22.746 "num_base_bdevs_operational": 2, 00:15:22.746 "base_bdevs_list": [ 00:15:22.746 { 00:15:22.746 "name": null, 00:15:22.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.746 "is_configured": false, 00:15:22.746 "data_offset": 0, 00:15:22.746 "data_size": 63488 00:15:22.746 }, 00:15:22.746 { 00:15:22.746 "name": null, 00:15:22.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.746 "is_configured": false, 00:15:22.746 "data_offset": 2048, 00:15:22.746 "data_size": 63488 00:15:22.746 }, 00:15:22.746 { 00:15:22.746 "name": "BaseBdev3", 00:15:22.746 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:22.746 "is_configured": true, 00:15:22.746 "data_offset": 2048, 00:15:22.746 "data_size": 63488 00:15:22.746 }, 00:15:22.746 { 00:15:22.746 "name": "BaseBdev4", 00:15:22.746 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:22.746 "is_configured": true, 00:15:22.746 "data_offset": 2048, 00:15:22.746 
"data_size": 63488 00:15:22.746 } 00:15:22.746 ] 00:15:22.746 }' 00:15:22.746 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.746 18:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.312 18:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:23.312 18:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.312 18:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.312 [2024-12-06 18:12:35.265165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:23.312 [2024-12-06 18:12:35.265247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.312 [2024-12-06 18:12:35.265282] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:23.312 [2024-12-06 18:12:35.265293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.312 [2024-12-06 18:12:35.265842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.312 [2024-12-06 18:12:35.265862] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:23.312 [2024-12-06 18:12:35.265971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:23.312 [2024-12-06 18:12:35.265986] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:23.312 [2024-12-06 18:12:35.266002] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:23.312 [2024-12-06 18:12:35.266025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.312 [2024-12-06 18:12:35.283699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:23.312 spare 00:15:23.312 18:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.312 18:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:23.312 [2024-12-06 18:12:35.286006] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.244 "name": "raid_bdev1", 00:15:24.244 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:24.244 "strip_size_kb": 0, 00:15:24.244 
"state": "online", 00:15:24.244 "raid_level": "raid1", 00:15:24.244 "superblock": true, 00:15:24.244 "num_base_bdevs": 4, 00:15:24.244 "num_base_bdevs_discovered": 3, 00:15:24.244 "num_base_bdevs_operational": 3, 00:15:24.244 "process": { 00:15:24.244 "type": "rebuild", 00:15:24.244 "target": "spare", 00:15:24.244 "progress": { 00:15:24.244 "blocks": 20480, 00:15:24.244 "percent": 32 00:15:24.244 } 00:15:24.244 }, 00:15:24.244 "base_bdevs_list": [ 00:15:24.244 { 00:15:24.244 "name": "spare", 00:15:24.244 "uuid": "378d076f-6165-5224-bbca-5f04d55ca74c", 00:15:24.244 "is_configured": true, 00:15:24.244 "data_offset": 2048, 00:15:24.244 "data_size": 63488 00:15:24.244 }, 00:15:24.244 { 00:15:24.244 "name": null, 00:15:24.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.244 "is_configured": false, 00:15:24.244 "data_offset": 2048, 00:15:24.244 "data_size": 63488 00:15:24.244 }, 00:15:24.244 { 00:15:24.244 "name": "BaseBdev3", 00:15:24.244 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:24.244 "is_configured": true, 00:15:24.244 "data_offset": 2048, 00:15:24.244 "data_size": 63488 00:15:24.244 }, 00:15:24.244 { 00:15:24.244 "name": "BaseBdev4", 00:15:24.244 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:24.244 "is_configured": true, 00:15:24.244 "data_offset": 2048, 00:15:24.244 "data_size": 63488 00:15:24.244 } 00:15:24.244 ] 00:15:24.244 }' 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.244 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:24.503 18:12:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.503 [2024-12-06 18:12:36.425373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.503 [2024-12-06 18:12:36.492336] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:24.503 [2024-12-06 18:12:36.492475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.503 [2024-12-06 18:12:36.492501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.503 [2024-12-06 18:12:36.492513] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.503 18:12:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.503 "name": "raid_bdev1", 00:15:24.503 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:24.503 "strip_size_kb": 0, 00:15:24.503 "state": "online", 00:15:24.503 "raid_level": "raid1", 00:15:24.503 "superblock": true, 00:15:24.503 "num_base_bdevs": 4, 00:15:24.503 "num_base_bdevs_discovered": 2, 00:15:24.503 "num_base_bdevs_operational": 2, 00:15:24.503 "base_bdevs_list": [ 00:15:24.503 { 00:15:24.503 "name": null, 00:15:24.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.503 "is_configured": false, 00:15:24.503 "data_offset": 0, 00:15:24.503 "data_size": 63488 00:15:24.503 }, 00:15:24.503 { 00:15:24.503 "name": null, 00:15:24.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.503 "is_configured": false, 00:15:24.503 "data_offset": 2048, 00:15:24.503 "data_size": 63488 00:15:24.503 }, 00:15:24.503 { 00:15:24.503 "name": "BaseBdev3", 00:15:24.503 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:24.503 "is_configured": true, 00:15:24.503 "data_offset": 2048, 00:15:24.503 "data_size": 63488 00:15:24.503 }, 00:15:24.503 { 00:15:24.503 "name": "BaseBdev4", 00:15:24.503 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:24.503 "is_configured": true, 00:15:24.503 "data_offset": 2048, 00:15:24.503 
"data_size": 63488 00:15:24.503 } 00:15:24.503 ] 00:15:24.503 }' 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.503 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.071 "name": "raid_bdev1", 00:15:25.071 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:25.071 "strip_size_kb": 0, 00:15:25.071 "state": "online", 00:15:25.071 "raid_level": "raid1", 00:15:25.071 "superblock": true, 00:15:25.071 "num_base_bdevs": 4, 00:15:25.071 "num_base_bdevs_discovered": 2, 00:15:25.071 "num_base_bdevs_operational": 2, 00:15:25.071 "base_bdevs_list": [ 00:15:25.071 { 00:15:25.071 "name": null, 00:15:25.071 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:25.071 "is_configured": false, 00:15:25.071 "data_offset": 0, 00:15:25.071 "data_size": 63488 00:15:25.071 }, 00:15:25.071 { 00:15:25.071 "name": null, 00:15:25.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.071 "is_configured": false, 00:15:25.071 "data_offset": 2048, 00:15:25.071 "data_size": 63488 00:15:25.071 }, 00:15:25.071 { 00:15:25.071 "name": "BaseBdev3", 00:15:25.071 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:25.071 "is_configured": true, 00:15:25.071 "data_offset": 2048, 00:15:25.071 "data_size": 63488 00:15:25.071 }, 00:15:25.071 { 00:15:25.071 "name": "BaseBdev4", 00:15:25.071 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:25.071 "is_configured": true, 00:15:25.071 "data_offset": 2048, 00:15:25.071 "data_size": 63488 00:15:25.071 } 00:15:25.071 ] 00:15:25.071 }' 00:15:25.071 18:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.071 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.071 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.071 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.071 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:25.071 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.071 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.071 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.071 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:25.071 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.071 18:12:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.071 [2024-12-06 18:12:37.099182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:25.071 [2024-12-06 18:12:37.099261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.071 [2024-12-06 18:12:37.099285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:25.071 [2024-12-06 18:12:37.099298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.071 [2024-12-06 18:12:37.099893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.071 [2024-12-06 18:12:37.099918] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.071 [2024-12-06 18:12:37.100021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:25.072 [2024-12-06 18:12:37.100041] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:25.072 [2024-12-06 18:12:37.100051] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:25.072 [2024-12-06 18:12:37.100088] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:25.072 BaseBdev1 00:15:25.072 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.072 18:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.016 "name": "raid_bdev1", 00:15:26.016 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:26.016 "strip_size_kb": 0, 00:15:26.016 "state": "online", 00:15:26.016 "raid_level": "raid1", 00:15:26.016 "superblock": true, 00:15:26.016 "num_base_bdevs": 4, 00:15:26.016 "num_base_bdevs_discovered": 2, 00:15:26.016 "num_base_bdevs_operational": 2, 00:15:26.016 "base_bdevs_list": [ 00:15:26.016 { 00:15:26.016 "name": null, 00:15:26.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.016 "is_configured": false, 00:15:26.016 
"data_offset": 0, 00:15:26.016 "data_size": 63488 00:15:26.016 }, 00:15:26.016 { 00:15:26.016 "name": null, 00:15:26.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.016 "is_configured": false, 00:15:26.016 "data_offset": 2048, 00:15:26.016 "data_size": 63488 00:15:26.016 }, 00:15:26.016 { 00:15:26.016 "name": "BaseBdev3", 00:15:26.016 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:26.016 "is_configured": true, 00:15:26.016 "data_offset": 2048, 00:15:26.016 "data_size": 63488 00:15:26.016 }, 00:15:26.016 { 00:15:26.016 "name": "BaseBdev4", 00:15:26.016 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:26.016 "is_configured": true, 00:15:26.016 "data_offset": 2048, 00:15:26.016 "data_size": 63488 00:15:26.016 } 00:15:26.016 ] 00:15:26.016 }' 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.016 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.583 "name": "raid_bdev1", 00:15:26.583 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:26.583 "strip_size_kb": 0, 00:15:26.583 "state": "online", 00:15:26.583 "raid_level": "raid1", 00:15:26.583 "superblock": true, 00:15:26.583 "num_base_bdevs": 4, 00:15:26.583 "num_base_bdevs_discovered": 2, 00:15:26.583 "num_base_bdevs_operational": 2, 00:15:26.583 "base_bdevs_list": [ 00:15:26.583 { 00:15:26.583 "name": null, 00:15:26.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.583 "is_configured": false, 00:15:26.583 "data_offset": 0, 00:15:26.583 "data_size": 63488 00:15:26.583 }, 00:15:26.583 { 00:15:26.583 "name": null, 00:15:26.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.583 "is_configured": false, 00:15:26.583 "data_offset": 2048, 00:15:26.583 "data_size": 63488 00:15:26.583 }, 00:15:26.583 { 00:15:26.583 "name": "BaseBdev3", 00:15:26.583 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:26.583 "is_configured": true, 00:15:26.583 "data_offset": 2048, 00:15:26.583 "data_size": 63488 00:15:26.583 }, 00:15:26.583 { 00:15:26.583 "name": "BaseBdev4", 00:15:26.583 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:26.583 "is_configured": true, 00:15:26.583 "data_offset": 2048, 00:15:26.583 "data_size": 63488 00:15:26.583 } 00:15:26.583 ] 00:15:26.583 }' 00:15:26.583 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.584 [2024-12-06 18:12:38.724741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.584 [2024-12-06 18:12:38.725017] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:26.584 [2024-12-06 18:12:38.725105] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:26.584 request: 00:15:26.584 { 00:15:26.584 "base_bdev": "BaseBdev1", 00:15:26.584 "raid_bdev": "raid_bdev1", 00:15:26.584 "method": "bdev_raid_add_base_bdev", 00:15:26.584 "req_id": 1 00:15:26.584 } 00:15:26.584 Got JSON-RPC error response 00:15:26.584 response: 00:15:26.584 { 00:15:26.584 "code": -22, 
00:15:26.584 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:26.584 } 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.584 18:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.962 18:12:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.962 "name": "raid_bdev1", 00:15:27.962 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:27.962 "strip_size_kb": 0, 00:15:27.962 "state": "online", 00:15:27.962 "raid_level": "raid1", 00:15:27.962 "superblock": true, 00:15:27.962 "num_base_bdevs": 4, 00:15:27.962 "num_base_bdevs_discovered": 2, 00:15:27.962 "num_base_bdevs_operational": 2, 00:15:27.962 "base_bdevs_list": [ 00:15:27.962 { 00:15:27.962 "name": null, 00:15:27.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.962 "is_configured": false, 00:15:27.962 "data_offset": 0, 00:15:27.962 "data_size": 63488 00:15:27.962 }, 00:15:27.962 { 00:15:27.962 "name": null, 00:15:27.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.962 "is_configured": false, 00:15:27.962 "data_offset": 2048, 00:15:27.962 "data_size": 63488 00:15:27.962 }, 00:15:27.962 { 00:15:27.962 "name": "BaseBdev3", 00:15:27.962 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:27.962 "is_configured": true, 00:15:27.962 "data_offset": 2048, 00:15:27.962 "data_size": 63488 00:15:27.962 }, 00:15:27.962 { 00:15:27.962 "name": "BaseBdev4", 00:15:27.962 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:27.962 "is_configured": true, 00:15:27.962 "data_offset": 2048, 00:15:27.962 "data_size": 63488 00:15:27.962 } 00:15:27.962 ] 00:15:27.962 }' 00:15:27.962 18:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.962 18:12:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.221 "name": "raid_bdev1", 00:15:28.221 "uuid": "b5da9c5a-57b0-4d0f-9134-e4a11ace97b0", 00:15:28.221 "strip_size_kb": 0, 00:15:28.221 "state": "online", 00:15:28.221 "raid_level": "raid1", 00:15:28.221 "superblock": true, 00:15:28.221 "num_base_bdevs": 4, 00:15:28.221 "num_base_bdevs_discovered": 2, 00:15:28.221 "num_base_bdevs_operational": 2, 00:15:28.221 "base_bdevs_list": [ 00:15:28.221 { 00:15:28.221 "name": null, 00:15:28.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.221 "is_configured": false, 00:15:28.221 "data_offset": 0, 00:15:28.221 "data_size": 63488 00:15:28.221 }, 00:15:28.221 { 00:15:28.221 "name": null, 00:15:28.221 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:28.221 "is_configured": false, 00:15:28.221 "data_offset": 2048, 00:15:28.221 "data_size": 63488 00:15:28.221 }, 00:15:28.221 { 00:15:28.221 "name": "BaseBdev3", 00:15:28.221 "uuid": "bf624df2-8bd8-5383-9e18-bea29348fe90", 00:15:28.221 "is_configured": true, 00:15:28.221 "data_offset": 2048, 00:15:28.221 "data_size": 63488 00:15:28.221 }, 00:15:28.221 { 00:15:28.221 "name": "BaseBdev4", 00:15:28.221 "uuid": "1d687541-4692-5b42-b38a-9ceb9beb127a", 00:15:28.221 "is_configured": true, 00:15:28.221 "data_offset": 2048, 00:15:28.221 "data_size": 63488 00:15:28.221 } 00:15:28.221 ] 00:15:28.221 }' 00:15:28.221 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.480 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79698 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79698 ']' 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79698 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79698 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.481 killing process with pid 79698 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79698' 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79698 00:15:28.481 Received shutdown signal, test time was about 18.598926 seconds 00:15:28.481 00:15:28.481 Latency(us) 00:15:28.481 [2024-12-06T18:12:40.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.481 [2024-12-06T18:12:40.649Z] =================================================================================================================== 00:15:28.481 [2024-12-06T18:12:40.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.481 [2024-12-06 18:12:40.484735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.481 18:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79698 00:15:28.481 [2024-12-06 18:12:40.484926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.481 [2024-12-06 18:12:40.485018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.481 [2024-12-06 18:12:40.485030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:29.046 [2024-12-06 18:12:40.981071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.423 18:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:30.423 00:15:30.423 real 0m22.424s 00:15:30.423 user 0m29.454s 00:15:30.423 sys 0m2.556s 00:15:30.423 18:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.423 18:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.423 ************************************ 00:15:30.423 END TEST raid_rebuild_test_sb_io 
00:15:30.423 ************************************ 00:15:30.423 18:12:42 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:30.423 18:12:42 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:30.423 18:12:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:30.423 18:12:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.423 18:12:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.423 ************************************ 00:15:30.423 START TEST raid5f_state_function_test 00:15:30.423 ************************************ 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.423 
18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80431 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:30.423 18:12:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80431' 00:15:30.423 Process raid pid: 80431 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80431 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80431 ']' 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.423 18:12:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.423 [2024-12-06 18:12:42.548177] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:15:30.423 [2024-12-06 18:12:42.548409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.685 [2024-12-06 18:12:42.730705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.944 [2024-12-06 18:12:42.865975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.944 [2024-12-06 18:12:43.099355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.944 [2024-12-06 18:12:43.099513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.512 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.513 [2024-12-06 18:12:43.504303] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.513 [2024-12-06 18:12:43.504375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.513 [2024-12-06 18:12:43.504388] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.513 [2024-12-06 18:12:43.504417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.513 [2024-12-06 18:12:43.504424] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:31.513 [2024-12-06 18:12:43.504435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.513 "name": "Existed_Raid", 00:15:31.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.513 "strip_size_kb": 64, 00:15:31.513 "state": "configuring", 00:15:31.513 "raid_level": "raid5f", 00:15:31.513 "superblock": false, 00:15:31.513 "num_base_bdevs": 3, 00:15:31.513 "num_base_bdevs_discovered": 0, 00:15:31.513 "num_base_bdevs_operational": 3, 00:15:31.513 "base_bdevs_list": [ 00:15:31.513 { 00:15:31.513 "name": "BaseBdev1", 00:15:31.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.513 "is_configured": false, 00:15:31.513 "data_offset": 0, 00:15:31.513 "data_size": 0 00:15:31.513 }, 00:15:31.513 { 00:15:31.513 "name": "BaseBdev2", 00:15:31.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.513 "is_configured": false, 00:15:31.513 "data_offset": 0, 00:15:31.513 "data_size": 0 00:15:31.513 }, 00:15:31.513 { 00:15:31.513 "name": "BaseBdev3", 00:15:31.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.513 "is_configured": false, 00:15:31.513 "data_offset": 0, 00:15:31.513 "data_size": 0 00:15:31.513 } 00:15:31.513 ] 00:15:31.513 }' 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.513 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.771 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.771 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.771 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.772 [2024-12-06 18:12:43.935535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.772 [2024-12-06 18:12:43.935644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.031 [2024-12-06 18:12:43.943534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.031 [2024-12-06 18:12:43.943638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.031 [2024-12-06 18:12:43.943709] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.031 [2024-12-06 18:12:43.943758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.031 [2024-12-06 18:12:43.943803] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.031 [2024-12-06 18:12:43.943843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.031 [2024-12-06 18:12:43.992355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.031 BaseBdev1 00:15:32.031 18:12:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.031 18:12:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.031 [ 00:15:32.031 { 00:15:32.031 "name": "BaseBdev1", 00:15:32.031 "aliases": [ 00:15:32.031 "bc44562a-f7d6-43e3-a7ae-76f9c6ce7b42" 00:15:32.031 ], 00:15:32.031 "product_name": "Malloc disk", 00:15:32.031 "block_size": 512, 00:15:32.031 "num_blocks": 65536, 00:15:32.031 "uuid": "bc44562a-f7d6-43e3-a7ae-76f9c6ce7b42", 00:15:32.031 "assigned_rate_limits": { 00:15:32.031 "rw_ios_per_sec": 0, 00:15:32.031 
"rw_mbytes_per_sec": 0, 00:15:32.031 "r_mbytes_per_sec": 0, 00:15:32.031 "w_mbytes_per_sec": 0 00:15:32.031 }, 00:15:32.031 "claimed": true, 00:15:32.031 "claim_type": "exclusive_write", 00:15:32.031 "zoned": false, 00:15:32.031 "supported_io_types": { 00:15:32.031 "read": true, 00:15:32.031 "write": true, 00:15:32.031 "unmap": true, 00:15:32.031 "flush": true, 00:15:32.031 "reset": true, 00:15:32.031 "nvme_admin": false, 00:15:32.031 "nvme_io": false, 00:15:32.031 "nvme_io_md": false, 00:15:32.031 "write_zeroes": true, 00:15:32.031 "zcopy": true, 00:15:32.031 "get_zone_info": false, 00:15:32.031 "zone_management": false, 00:15:32.031 "zone_append": false, 00:15:32.031 "compare": false, 00:15:32.031 "compare_and_write": false, 00:15:32.031 "abort": true, 00:15:32.031 "seek_hole": false, 00:15:32.031 "seek_data": false, 00:15:32.031 "copy": true, 00:15:32.031 "nvme_iov_md": false 00:15:32.031 }, 00:15:32.031 "memory_domains": [ 00:15:32.031 { 00:15:32.031 "dma_device_id": "system", 00:15:32.031 "dma_device_type": 1 00:15:32.031 }, 00:15:32.031 { 00:15:32.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.031 "dma_device_type": 2 00:15:32.031 } 00:15:32.031 ], 00:15:32.031 "driver_specific": {} 00:15:32.031 } 00:15:32.031 ] 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.031 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.031 18:12:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.032 "name": "Existed_Raid", 00:15:32.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.032 "strip_size_kb": 64, 00:15:32.032 "state": "configuring", 00:15:32.032 "raid_level": "raid5f", 00:15:32.032 "superblock": false, 00:15:32.032 "num_base_bdevs": 3, 00:15:32.032 "num_base_bdevs_discovered": 1, 00:15:32.032 "num_base_bdevs_operational": 3, 00:15:32.032 "base_bdevs_list": [ 00:15:32.032 { 00:15:32.032 "name": "BaseBdev1", 00:15:32.032 "uuid": "bc44562a-f7d6-43e3-a7ae-76f9c6ce7b42", 00:15:32.032 "is_configured": true, 00:15:32.032 "data_offset": 0, 00:15:32.032 "data_size": 65536 00:15:32.032 }, 00:15:32.032 { 00:15:32.032 "name": 
"BaseBdev2", 00:15:32.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.032 "is_configured": false, 00:15:32.032 "data_offset": 0, 00:15:32.032 "data_size": 0 00:15:32.032 }, 00:15:32.032 { 00:15:32.032 "name": "BaseBdev3", 00:15:32.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.032 "is_configured": false, 00:15:32.032 "data_offset": 0, 00:15:32.032 "data_size": 0 00:15:32.032 } 00:15:32.032 ] 00:15:32.032 }' 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.032 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.601 [2024-12-06 18:12:44.483825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.601 [2024-12-06 18:12:44.483955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.601 [2024-12-06 18:12:44.491924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.601 [2024-12-06 18:12:44.494210] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:32.601 [2024-12-06 18:12:44.494314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.601 [2024-12-06 18:12:44.494362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.601 [2024-12-06 18:12:44.494391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.601 "name": "Existed_Raid", 00:15:32.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.601 "strip_size_kb": 64, 00:15:32.601 "state": "configuring", 00:15:32.601 "raid_level": "raid5f", 00:15:32.601 "superblock": false, 00:15:32.601 "num_base_bdevs": 3, 00:15:32.601 "num_base_bdevs_discovered": 1, 00:15:32.601 "num_base_bdevs_operational": 3, 00:15:32.601 "base_bdevs_list": [ 00:15:32.601 { 00:15:32.601 "name": "BaseBdev1", 00:15:32.601 "uuid": "bc44562a-f7d6-43e3-a7ae-76f9c6ce7b42", 00:15:32.601 "is_configured": true, 00:15:32.601 "data_offset": 0, 00:15:32.601 "data_size": 65536 00:15:32.601 }, 00:15:32.601 { 00:15:32.601 "name": "BaseBdev2", 00:15:32.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.601 "is_configured": false, 00:15:32.601 "data_offset": 0, 00:15:32.601 "data_size": 0 00:15:32.601 }, 00:15:32.601 { 00:15:32.601 "name": "BaseBdev3", 00:15:32.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.601 "is_configured": false, 00:15:32.601 "data_offset": 0, 00:15:32.601 "data_size": 0 00:15:32.601 } 00:15:32.601 ] 00:15:32.601 }' 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.601 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.860 [2024-12-06 18:12:44.946775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.860 BaseBdev2 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.860 [ 00:15:32.860 { 00:15:32.860 "name": "BaseBdev2", 00:15:32.860 "aliases": [ 00:15:32.860 "c767699f-5d58-4893-9f9d-ec119cbd0719" 00:15:32.860 ], 00:15:32.860 "product_name": "Malloc disk", 00:15:32.860 "block_size": 512, 00:15:32.860 "num_blocks": 65536, 00:15:32.860 "uuid": "c767699f-5d58-4893-9f9d-ec119cbd0719", 00:15:32.860 "assigned_rate_limits": { 00:15:32.860 "rw_ios_per_sec": 0, 00:15:32.860 "rw_mbytes_per_sec": 0, 00:15:32.860 "r_mbytes_per_sec": 0, 00:15:32.860 "w_mbytes_per_sec": 0 00:15:32.860 }, 00:15:32.860 "claimed": true, 00:15:32.860 "claim_type": "exclusive_write", 00:15:32.860 "zoned": false, 00:15:32.860 "supported_io_types": { 00:15:32.860 "read": true, 00:15:32.860 "write": true, 00:15:32.860 "unmap": true, 00:15:32.860 "flush": true, 00:15:32.860 "reset": true, 00:15:32.860 "nvme_admin": false, 00:15:32.860 "nvme_io": false, 00:15:32.860 "nvme_io_md": false, 00:15:32.860 "write_zeroes": true, 00:15:32.860 "zcopy": true, 00:15:32.860 "get_zone_info": false, 00:15:32.860 "zone_management": false, 00:15:32.860 "zone_append": false, 00:15:32.860 "compare": false, 00:15:32.860 "compare_and_write": false, 00:15:32.860 "abort": true, 00:15:32.860 "seek_hole": false, 00:15:32.860 "seek_data": false, 00:15:32.860 "copy": true, 00:15:32.860 "nvme_iov_md": false 00:15:32.860 }, 00:15:32.860 "memory_domains": [ 00:15:32.860 { 00:15:32.860 "dma_device_id": "system", 00:15:32.860 "dma_device_type": 1 00:15:32.860 }, 00:15:32.860 { 00:15:32.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.860 "dma_device_type": 2 00:15:32.860 } 00:15:32.860 ], 00:15:32.860 "driver_specific": {} 00:15:32.860 } 00:15:32.860 ] 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.860 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.861 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.861 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.861 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.861 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.861 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.861 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.861 18:12:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.861 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.861 18:12:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.861 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.119 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:33.119 "name": "Existed_Raid", 00:15:33.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.119 "strip_size_kb": 64, 00:15:33.119 "state": "configuring", 00:15:33.119 "raid_level": "raid5f", 00:15:33.119 "superblock": false, 00:15:33.119 "num_base_bdevs": 3, 00:15:33.119 "num_base_bdevs_discovered": 2, 00:15:33.119 "num_base_bdevs_operational": 3, 00:15:33.119 "base_bdevs_list": [ 00:15:33.119 { 00:15:33.119 "name": "BaseBdev1", 00:15:33.119 "uuid": "bc44562a-f7d6-43e3-a7ae-76f9c6ce7b42", 00:15:33.119 "is_configured": true, 00:15:33.119 "data_offset": 0, 00:15:33.119 "data_size": 65536 00:15:33.119 }, 00:15:33.119 { 00:15:33.119 "name": "BaseBdev2", 00:15:33.119 "uuid": "c767699f-5d58-4893-9f9d-ec119cbd0719", 00:15:33.119 "is_configured": true, 00:15:33.119 "data_offset": 0, 00:15:33.119 "data_size": 65536 00:15:33.119 }, 00:15:33.119 { 00:15:33.119 "name": "BaseBdev3", 00:15:33.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.119 "is_configured": false, 00:15:33.119 "data_offset": 0, 00:15:33.119 "data_size": 0 00:15:33.119 } 00:15:33.119 ] 00:15:33.119 }' 00:15:33.119 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.119 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.377 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:33.377 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.377 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.377 [2024-12-06 18:12:45.542861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.377 [2024-12-06 18:12:45.543058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:33.377 [2024-12-06 18:12:45.543133] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:33.377 [2024-12-06 18:12:45.543660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:33.636 [2024-12-06 18:12:45.550285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:33.636 [2024-12-06 18:12:45.550384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:33.636 [2024-12-06 18:12:45.550858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.636 BaseBdev3 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.636 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.636 [ 00:15:33.636 { 00:15:33.636 "name": "BaseBdev3", 00:15:33.636 "aliases": [ 00:15:33.636 "b0f93a5a-2e70-43cf-8019-7d7c05eca726" 00:15:33.636 ], 00:15:33.636 "product_name": "Malloc disk", 00:15:33.636 "block_size": 512, 00:15:33.636 "num_blocks": 65536, 00:15:33.636 "uuid": "b0f93a5a-2e70-43cf-8019-7d7c05eca726", 00:15:33.636 "assigned_rate_limits": { 00:15:33.636 "rw_ios_per_sec": 0, 00:15:33.636 "rw_mbytes_per_sec": 0, 00:15:33.636 "r_mbytes_per_sec": 0, 00:15:33.636 "w_mbytes_per_sec": 0 00:15:33.636 }, 00:15:33.636 "claimed": true, 00:15:33.636 "claim_type": "exclusive_write", 00:15:33.636 "zoned": false, 00:15:33.636 "supported_io_types": { 00:15:33.637 "read": true, 00:15:33.637 "write": true, 00:15:33.637 "unmap": true, 00:15:33.637 "flush": true, 00:15:33.637 "reset": true, 00:15:33.637 "nvme_admin": false, 00:15:33.637 "nvme_io": false, 00:15:33.637 "nvme_io_md": false, 00:15:33.637 "write_zeroes": true, 00:15:33.637 "zcopy": true, 00:15:33.637 "get_zone_info": false, 00:15:33.637 "zone_management": false, 00:15:33.637 "zone_append": false, 00:15:33.637 "compare": false, 00:15:33.637 "compare_and_write": false, 00:15:33.637 "abort": true, 00:15:33.637 "seek_hole": false, 00:15:33.637 "seek_data": false, 00:15:33.637 "copy": true, 00:15:33.637 "nvme_iov_md": false 00:15:33.637 }, 00:15:33.637 "memory_domains": [ 00:15:33.637 { 00:15:33.637 "dma_device_id": "system", 00:15:33.637 "dma_device_type": 1 00:15:33.637 }, 00:15:33.637 { 00:15:33.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.637 "dma_device_type": 2 00:15:33.637 } 00:15:33.637 ], 00:15:33.637 "driver_specific": {} 00:15:33.637 } 00:15:33.637 ] 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.637 18:12:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.637 "name": "Existed_Raid", 00:15:33.637 "uuid": "88e5c55d-3fd8-4423-818d-157936ab36ee", 00:15:33.637 "strip_size_kb": 64, 00:15:33.637 "state": "online", 00:15:33.637 "raid_level": "raid5f", 00:15:33.637 "superblock": false, 00:15:33.637 "num_base_bdevs": 3, 00:15:33.637 "num_base_bdevs_discovered": 3, 00:15:33.637 "num_base_bdevs_operational": 3, 00:15:33.637 "base_bdevs_list": [ 00:15:33.637 { 00:15:33.637 "name": "BaseBdev1", 00:15:33.637 "uuid": "bc44562a-f7d6-43e3-a7ae-76f9c6ce7b42", 00:15:33.637 "is_configured": true, 00:15:33.637 "data_offset": 0, 00:15:33.637 "data_size": 65536 00:15:33.637 }, 00:15:33.637 { 00:15:33.637 "name": "BaseBdev2", 00:15:33.637 "uuid": "c767699f-5d58-4893-9f9d-ec119cbd0719", 00:15:33.637 "is_configured": true, 00:15:33.637 "data_offset": 0, 00:15:33.637 "data_size": 65536 00:15:33.637 }, 00:15:33.637 { 00:15:33.637 "name": "BaseBdev3", 00:15:33.637 "uuid": "b0f93a5a-2e70-43cf-8019-7d7c05eca726", 00:15:33.637 "is_configured": true, 00:15:33.637 "data_offset": 0, 00:15:33.637 "data_size": 65536 00:15:33.637 } 00:15:33.637 ] 00:15:33.637 }' 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.637 18:12:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.895 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.895 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:33.895 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.895 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.895 18:12:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.895 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.895 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:33.895 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.895 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.895 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.895 [2024-12-06 18:12:46.050427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.153 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.153 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.153 "name": "Existed_Raid", 00:15:34.153 "aliases": [ 00:15:34.153 "88e5c55d-3fd8-4423-818d-157936ab36ee" 00:15:34.153 ], 00:15:34.153 "product_name": "Raid Volume", 00:15:34.153 "block_size": 512, 00:15:34.153 "num_blocks": 131072, 00:15:34.153 "uuid": "88e5c55d-3fd8-4423-818d-157936ab36ee", 00:15:34.154 "assigned_rate_limits": { 00:15:34.154 "rw_ios_per_sec": 0, 00:15:34.154 "rw_mbytes_per_sec": 0, 00:15:34.154 "r_mbytes_per_sec": 0, 00:15:34.154 "w_mbytes_per_sec": 0 00:15:34.154 }, 00:15:34.154 "claimed": false, 00:15:34.154 "zoned": false, 00:15:34.154 "supported_io_types": { 00:15:34.154 "read": true, 00:15:34.154 "write": true, 00:15:34.154 "unmap": false, 00:15:34.154 "flush": false, 00:15:34.154 "reset": true, 00:15:34.154 "nvme_admin": false, 00:15:34.154 "nvme_io": false, 00:15:34.154 "nvme_io_md": false, 00:15:34.154 "write_zeroes": true, 00:15:34.154 "zcopy": false, 00:15:34.154 "get_zone_info": false, 00:15:34.154 "zone_management": false, 00:15:34.154 "zone_append": false, 
00:15:34.154 "compare": false, 00:15:34.154 "compare_and_write": false, 00:15:34.154 "abort": false, 00:15:34.154 "seek_hole": false, 00:15:34.154 "seek_data": false, 00:15:34.154 "copy": false, 00:15:34.154 "nvme_iov_md": false 00:15:34.154 }, 00:15:34.154 "driver_specific": { 00:15:34.154 "raid": { 00:15:34.154 "uuid": "88e5c55d-3fd8-4423-818d-157936ab36ee", 00:15:34.154 "strip_size_kb": 64, 00:15:34.154 "state": "online", 00:15:34.154 "raid_level": "raid5f", 00:15:34.154 "superblock": false, 00:15:34.154 "num_base_bdevs": 3, 00:15:34.154 "num_base_bdevs_discovered": 3, 00:15:34.154 "num_base_bdevs_operational": 3, 00:15:34.154 "base_bdevs_list": [ 00:15:34.154 { 00:15:34.154 "name": "BaseBdev1", 00:15:34.154 "uuid": "bc44562a-f7d6-43e3-a7ae-76f9c6ce7b42", 00:15:34.154 "is_configured": true, 00:15:34.154 "data_offset": 0, 00:15:34.154 "data_size": 65536 00:15:34.154 }, 00:15:34.154 { 00:15:34.154 "name": "BaseBdev2", 00:15:34.154 "uuid": "c767699f-5d58-4893-9f9d-ec119cbd0719", 00:15:34.154 "is_configured": true, 00:15:34.154 "data_offset": 0, 00:15:34.154 "data_size": 65536 00:15:34.154 }, 00:15:34.154 { 00:15:34.154 "name": "BaseBdev3", 00:15:34.154 "uuid": "b0f93a5a-2e70-43cf-8019-7d7c05eca726", 00:15:34.154 "is_configured": true, 00:15:34.154 "data_offset": 0, 00:15:34.154 "data_size": 65536 00:15:34.154 } 00:15:34.154 ] 00:15:34.154 } 00:15:34.154 } 00:15:34.154 }' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:34.154 BaseBdev2 00:15:34.154 BaseBdev3' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.154 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.412 [2024-12-06 18:12:46.337735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:34.412 
18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.412 "name": "Existed_Raid", 00:15:34.412 "uuid": "88e5c55d-3fd8-4423-818d-157936ab36ee", 00:15:34.412 "strip_size_kb": 64, 00:15:34.412 "state": 
"online", 00:15:34.412 "raid_level": "raid5f", 00:15:34.412 "superblock": false, 00:15:34.412 "num_base_bdevs": 3, 00:15:34.412 "num_base_bdevs_discovered": 2, 00:15:34.412 "num_base_bdevs_operational": 2, 00:15:34.412 "base_bdevs_list": [ 00:15:34.412 { 00:15:34.412 "name": null, 00:15:34.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.412 "is_configured": false, 00:15:34.412 "data_offset": 0, 00:15:34.412 "data_size": 65536 00:15:34.412 }, 00:15:34.412 { 00:15:34.412 "name": "BaseBdev2", 00:15:34.412 "uuid": "c767699f-5d58-4893-9f9d-ec119cbd0719", 00:15:34.412 "is_configured": true, 00:15:34.412 "data_offset": 0, 00:15:34.412 "data_size": 65536 00:15:34.412 }, 00:15:34.412 { 00:15:34.412 "name": "BaseBdev3", 00:15:34.412 "uuid": "b0f93a5a-2e70-43cf-8019-7d7c05eca726", 00:15:34.412 "is_configured": true, 00:15:34.412 "data_offset": 0, 00:15:34.412 "data_size": 65536 00:15:34.412 } 00:15:34.412 ] 00:15:34.412 }' 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.412 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.979 18:12:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.979 [2024-12-06 18:12:46.904219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.979 [2024-12-06 18:12:46.904341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.979 [2024-12-06 18:12:47.015492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.979 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.979 [2024-12-06 18:12:47.075488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:34.979 [2024-12-06 18:12:47.075634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.238 BaseBdev2 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:35.238 [ 00:15:35.238 { 00:15:35.238 "name": "BaseBdev2", 00:15:35.238 "aliases": [ 00:15:35.238 "29f83041-0a57-4851-9e87-65a1ba4047dd" 00:15:35.238 ], 00:15:35.238 "product_name": "Malloc disk", 00:15:35.238 "block_size": 512, 00:15:35.238 "num_blocks": 65536, 00:15:35.238 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:35.238 "assigned_rate_limits": { 00:15:35.238 "rw_ios_per_sec": 0, 00:15:35.238 "rw_mbytes_per_sec": 0, 00:15:35.238 "r_mbytes_per_sec": 0, 00:15:35.238 "w_mbytes_per_sec": 0 00:15:35.238 }, 00:15:35.238 "claimed": false, 00:15:35.238 "zoned": false, 00:15:35.238 "supported_io_types": { 00:15:35.238 "read": true, 00:15:35.238 "write": true, 00:15:35.238 "unmap": true, 00:15:35.238 "flush": true, 00:15:35.238 "reset": true, 00:15:35.238 "nvme_admin": false, 00:15:35.238 "nvme_io": false, 00:15:35.238 "nvme_io_md": false, 00:15:35.238 "write_zeroes": true, 00:15:35.238 "zcopy": true, 00:15:35.238 "get_zone_info": false, 00:15:35.238 "zone_management": false, 00:15:35.238 "zone_append": false, 00:15:35.238 "compare": false, 00:15:35.238 "compare_and_write": false, 00:15:35.238 "abort": true, 00:15:35.238 "seek_hole": false, 00:15:35.238 "seek_data": false, 00:15:35.238 "copy": true, 00:15:35.238 "nvme_iov_md": false 00:15:35.238 }, 00:15:35.238 "memory_domains": [ 00:15:35.238 { 00:15:35.238 "dma_device_id": "system", 00:15:35.238 "dma_device_type": 1 00:15:35.238 }, 00:15:35.238 { 00:15:35.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.238 "dma_device_type": 2 00:15:35.238 } 00:15:35.238 ], 00:15:35.238 "driver_specific": {} 00:15:35.238 } 00:15:35.238 ] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.238 BaseBdev3 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.238 18:12:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.238 [ 00:15:35.238 { 00:15:35.238 "name": "BaseBdev3", 00:15:35.238 "aliases": [ 00:15:35.238 "45adec1a-2677-441d-b5d8-08aa490028b4" 00:15:35.238 ], 00:15:35.238 "product_name": "Malloc disk", 00:15:35.238 "block_size": 512, 00:15:35.238 "num_blocks": 65536, 00:15:35.238 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:35.238 "assigned_rate_limits": { 00:15:35.238 "rw_ios_per_sec": 0, 00:15:35.238 "rw_mbytes_per_sec": 0, 00:15:35.238 "r_mbytes_per_sec": 0, 00:15:35.238 "w_mbytes_per_sec": 0 00:15:35.238 }, 00:15:35.238 "claimed": false, 00:15:35.238 "zoned": false, 00:15:35.238 "supported_io_types": { 00:15:35.238 "read": true, 00:15:35.238 "write": true, 00:15:35.238 "unmap": true, 00:15:35.238 "flush": true, 00:15:35.238 "reset": true, 00:15:35.238 "nvme_admin": false, 00:15:35.238 "nvme_io": false, 00:15:35.238 "nvme_io_md": false, 00:15:35.238 "write_zeroes": true, 00:15:35.238 "zcopy": true, 00:15:35.238 "get_zone_info": false, 00:15:35.238 "zone_management": false, 00:15:35.238 "zone_append": false, 00:15:35.238 "compare": false, 00:15:35.238 "compare_and_write": false, 00:15:35.498 "abort": true, 00:15:35.498 "seek_hole": false, 00:15:35.498 "seek_data": false, 00:15:35.498 "copy": true, 00:15:35.498 "nvme_iov_md": false 00:15:35.498 }, 00:15:35.498 "memory_domains": [ 00:15:35.498 { 00:15:35.498 "dma_device_id": "system", 00:15:35.498 "dma_device_type": 1 00:15:35.498 }, 00:15:35.498 { 00:15:35.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.498 "dma_device_type": 2 00:15:35.498 } 00:15:35.498 ], 00:15:35.498 "driver_specific": {} 00:15:35.498 } 00:15:35.498 ] 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:35.498 18:12:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.498 [2024-12-06 18:12:47.415155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.498 [2024-12-06 18:12:47.415303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.498 [2024-12-06 18:12:47.415347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.498 [2024-12-06 18:12:47.417625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.498 18:12:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.498 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.498 "name": "Existed_Raid", 00:15:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.498 "strip_size_kb": 64, 00:15:35.498 "state": "configuring", 00:15:35.498 "raid_level": "raid5f", 00:15:35.498 "superblock": false, 00:15:35.498 "num_base_bdevs": 3, 00:15:35.498 "num_base_bdevs_discovered": 2, 00:15:35.498 "num_base_bdevs_operational": 3, 00:15:35.498 "base_bdevs_list": [ 00:15:35.498 { 00:15:35.498 "name": "BaseBdev1", 00:15:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.498 "is_configured": false, 00:15:35.498 "data_offset": 0, 00:15:35.498 "data_size": 0 00:15:35.498 }, 00:15:35.498 { 00:15:35.498 "name": "BaseBdev2", 00:15:35.498 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:35.498 "is_configured": true, 00:15:35.498 "data_offset": 0, 00:15:35.498 "data_size": 65536 00:15:35.498 }, 00:15:35.498 { 00:15:35.498 "name": "BaseBdev3", 00:15:35.499 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:35.499 "is_configured": true, 
00:15:35.499 "data_offset": 0, 00:15:35.499 "data_size": 65536 00:15:35.499 } 00:15:35.499 ] 00:15:35.499 }' 00:15:35.499 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.499 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.760 [2024-12-06 18:12:47.898303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.760 18:12:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.760 18:12:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.031 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.031 "name": "Existed_Raid", 00:15:36.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.031 "strip_size_kb": 64, 00:15:36.031 "state": "configuring", 00:15:36.031 "raid_level": "raid5f", 00:15:36.031 "superblock": false, 00:15:36.031 "num_base_bdevs": 3, 00:15:36.031 "num_base_bdevs_discovered": 1, 00:15:36.031 "num_base_bdevs_operational": 3, 00:15:36.031 "base_bdevs_list": [ 00:15:36.031 { 00:15:36.031 "name": "BaseBdev1", 00:15:36.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.031 "is_configured": false, 00:15:36.031 "data_offset": 0, 00:15:36.031 "data_size": 0 00:15:36.031 }, 00:15:36.031 { 00:15:36.031 "name": null, 00:15:36.031 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:36.031 "is_configured": false, 00:15:36.031 "data_offset": 0, 00:15:36.031 "data_size": 65536 00:15:36.031 }, 00:15:36.031 { 00:15:36.031 "name": "BaseBdev3", 00:15:36.031 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:36.031 "is_configured": true, 00:15:36.031 "data_offset": 0, 00:15:36.031 "data_size": 65536 00:15:36.031 } 00:15:36.031 ] 00:15:36.031 }' 00:15:36.031 18:12:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.031 18:12:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.310 [2024-12-06 18:12:48.448001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.310 BaseBdev1 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.310 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.311 18:12:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.311 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.311 [ 00:15:36.569 { 00:15:36.569 "name": "BaseBdev1", 00:15:36.569 "aliases": [ 00:15:36.569 "bc790a99-cab5-492c-b6c8-6e67a1d9715b" 00:15:36.569 ], 00:15:36.569 "product_name": "Malloc disk", 00:15:36.569 "block_size": 512, 00:15:36.569 "num_blocks": 65536, 00:15:36.569 "uuid": "bc790a99-cab5-492c-b6c8-6e67a1d9715b", 00:15:36.569 "assigned_rate_limits": { 00:15:36.569 "rw_ios_per_sec": 0, 00:15:36.569 "rw_mbytes_per_sec": 0, 00:15:36.569 "r_mbytes_per_sec": 0, 00:15:36.569 "w_mbytes_per_sec": 0 00:15:36.569 }, 00:15:36.569 "claimed": true, 00:15:36.569 "claim_type": "exclusive_write", 00:15:36.569 "zoned": false, 00:15:36.569 "supported_io_types": { 00:15:36.569 "read": true, 00:15:36.569 "write": true, 00:15:36.569 "unmap": true, 00:15:36.569 "flush": true, 00:15:36.569 "reset": true, 00:15:36.569 "nvme_admin": false, 00:15:36.569 "nvme_io": false, 00:15:36.569 "nvme_io_md": false, 00:15:36.569 "write_zeroes": true, 00:15:36.569 "zcopy": true, 00:15:36.569 "get_zone_info": false, 00:15:36.569 "zone_management": false, 00:15:36.569 "zone_append": false, 00:15:36.569 
"compare": false, 00:15:36.569 "compare_and_write": false, 00:15:36.569 "abort": true, 00:15:36.569 "seek_hole": false, 00:15:36.569 "seek_data": false, 00:15:36.569 "copy": true, 00:15:36.569 "nvme_iov_md": false 00:15:36.569 }, 00:15:36.569 "memory_domains": [ 00:15:36.569 { 00:15:36.569 "dma_device_id": "system", 00:15:36.569 "dma_device_type": 1 00:15:36.569 }, 00:15:36.569 { 00:15:36.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.569 "dma_device_type": 2 00:15:36.569 } 00:15:36.569 ], 00:15:36.569 "driver_specific": {} 00:15:36.569 } 00:15:36.569 ] 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.569 18:12:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.569 "name": "Existed_Raid", 00:15:36.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.569 "strip_size_kb": 64, 00:15:36.569 "state": "configuring", 00:15:36.569 "raid_level": "raid5f", 00:15:36.569 "superblock": false, 00:15:36.569 "num_base_bdevs": 3, 00:15:36.569 "num_base_bdevs_discovered": 2, 00:15:36.569 "num_base_bdevs_operational": 3, 00:15:36.569 "base_bdevs_list": [ 00:15:36.569 { 00:15:36.569 "name": "BaseBdev1", 00:15:36.569 "uuid": "bc790a99-cab5-492c-b6c8-6e67a1d9715b", 00:15:36.569 "is_configured": true, 00:15:36.569 "data_offset": 0, 00:15:36.569 "data_size": 65536 00:15:36.569 }, 00:15:36.569 { 00:15:36.569 "name": null, 00:15:36.569 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:36.569 "is_configured": false, 00:15:36.569 "data_offset": 0, 00:15:36.569 "data_size": 65536 00:15:36.569 }, 00:15:36.569 { 00:15:36.569 "name": "BaseBdev3", 00:15:36.569 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:36.569 "is_configured": true, 00:15:36.569 "data_offset": 0, 00:15:36.569 "data_size": 65536 00:15:36.569 } 00:15:36.569 ] 00:15:36.569 }' 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.569 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.828 18:12:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.828 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.828 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.828 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:36.828 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.828 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:36.828 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:36.828 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.828 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.086 [2024-12-06 18:12:48.995575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.086 18:12:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.086 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.086 18:12:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.086 18:12:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.086 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.086 "name": "Existed_Raid", 00:15:37.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.086 "strip_size_kb": 64, 00:15:37.086 "state": "configuring", 00:15:37.086 "raid_level": "raid5f", 00:15:37.086 "superblock": false, 00:15:37.086 "num_base_bdevs": 3, 00:15:37.087 "num_base_bdevs_discovered": 1, 00:15:37.087 "num_base_bdevs_operational": 3, 00:15:37.087 "base_bdevs_list": [ 00:15:37.087 { 00:15:37.087 "name": "BaseBdev1", 00:15:37.087 "uuid": "bc790a99-cab5-492c-b6c8-6e67a1d9715b", 00:15:37.087 "is_configured": true, 00:15:37.087 "data_offset": 0, 00:15:37.087 "data_size": 65536 00:15:37.087 }, 00:15:37.087 { 00:15:37.087 "name": null, 00:15:37.087 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:37.087 "is_configured": false, 00:15:37.087 "data_offset": 0, 00:15:37.087 "data_size": 65536 00:15:37.087 }, 00:15:37.087 { 00:15:37.087 "name": null, 
00:15:37.087 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:37.087 "is_configured": false, 00:15:37.087 "data_offset": 0, 00:15:37.087 "data_size": 65536 00:15:37.087 } 00:15:37.087 ] 00:15:37.087 }' 00:15:37.087 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.087 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.345 [2024-12-06 18:12:49.478905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.345 18:12:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.345 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.603 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.603 "name": "Existed_Raid", 00:15:37.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.603 "strip_size_kb": 64, 00:15:37.603 "state": "configuring", 00:15:37.603 "raid_level": "raid5f", 00:15:37.603 "superblock": false, 00:15:37.603 "num_base_bdevs": 3, 00:15:37.603 "num_base_bdevs_discovered": 2, 00:15:37.603 "num_base_bdevs_operational": 3, 00:15:37.603 "base_bdevs_list": [ 00:15:37.603 { 
00:15:37.603 "name": "BaseBdev1", 00:15:37.603 "uuid": "bc790a99-cab5-492c-b6c8-6e67a1d9715b", 00:15:37.603 "is_configured": true, 00:15:37.603 "data_offset": 0, 00:15:37.603 "data_size": 65536 00:15:37.603 }, 00:15:37.603 { 00:15:37.603 "name": null, 00:15:37.603 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:37.603 "is_configured": false, 00:15:37.603 "data_offset": 0, 00:15:37.603 "data_size": 65536 00:15:37.603 }, 00:15:37.603 { 00:15:37.603 "name": "BaseBdev3", 00:15:37.603 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:37.603 "is_configured": true, 00:15:37.603 "data_offset": 0, 00:15:37.603 "data_size": 65536 00:15:37.603 } 00:15:37.603 ] 00:15:37.603 }' 00:15:37.603 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.603 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.861 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.861 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.861 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.861 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.861 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.861 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:37.861 18:12:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:37.861 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.861 18:12:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.861 [2024-12-06 18:12:50.002107] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.120 "name": "Existed_Raid", 00:15:38.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.120 "strip_size_kb": 64, 00:15:38.120 "state": "configuring", 00:15:38.120 "raid_level": "raid5f", 00:15:38.120 "superblock": false, 00:15:38.120 "num_base_bdevs": 3, 00:15:38.120 "num_base_bdevs_discovered": 1, 00:15:38.120 "num_base_bdevs_operational": 3, 00:15:38.120 "base_bdevs_list": [ 00:15:38.120 { 00:15:38.120 "name": null, 00:15:38.120 "uuid": "bc790a99-cab5-492c-b6c8-6e67a1d9715b", 00:15:38.120 "is_configured": false, 00:15:38.120 "data_offset": 0, 00:15:38.120 "data_size": 65536 00:15:38.120 }, 00:15:38.120 { 00:15:38.120 "name": null, 00:15:38.120 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:38.120 "is_configured": false, 00:15:38.120 "data_offset": 0, 00:15:38.120 "data_size": 65536 00:15:38.120 }, 00:15:38.120 { 00:15:38.120 "name": "BaseBdev3", 00:15:38.120 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:38.120 "is_configured": true, 00:15:38.120 "data_offset": 0, 00:15:38.120 "data_size": 65536 00:15:38.120 } 00:15:38.120 ] 00:15:38.120 }' 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.120 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.378 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:38.378 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.378 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.378 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.638 [2024-12-06 18:12:50.564294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.638 18:12:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.638 "name": "Existed_Raid", 00:15:38.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.638 "strip_size_kb": 64, 00:15:38.638 "state": "configuring", 00:15:38.638 "raid_level": "raid5f", 00:15:38.638 "superblock": false, 00:15:38.638 "num_base_bdevs": 3, 00:15:38.638 "num_base_bdevs_discovered": 2, 00:15:38.638 "num_base_bdevs_operational": 3, 00:15:38.638 "base_bdevs_list": [ 00:15:38.638 { 00:15:38.638 "name": null, 00:15:38.638 "uuid": "bc790a99-cab5-492c-b6c8-6e67a1d9715b", 00:15:38.638 "is_configured": false, 00:15:38.638 "data_offset": 0, 00:15:38.638 "data_size": 65536 00:15:38.638 }, 00:15:38.638 { 00:15:38.638 "name": "BaseBdev2", 00:15:38.638 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:38.638 "is_configured": true, 00:15:38.638 "data_offset": 0, 00:15:38.638 "data_size": 65536 00:15:38.638 }, 00:15:38.638 { 00:15:38.638 "name": "BaseBdev3", 00:15:38.638 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:38.638 "is_configured": true, 00:15:38.638 "data_offset": 0, 00:15:38.638 "data_size": 65536 00:15:38.638 } 00:15:38.638 ] 00:15:38.638 }' 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.638 18:12:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.896 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.896 18:12:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.896 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.896 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.896 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bc790a99-cab5-492c-b6c8-6e67a1d9715b 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.155 [2024-12-06 18:12:51.185758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:39.155 [2024-12-06 18:12:51.185926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:39.155 [2024-12-06 18:12:51.185959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:39.155 [2024-12-06 18:12:51.186339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:15:39.155 [2024-12-06 18:12:51.192789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:39.155 [2024-12-06 18:12:51.192890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:39.155 [2024-12-06 18:12:51.193346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.155 NewBaseBdev 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.155 18:12:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.155 [ 00:15:39.155 { 00:15:39.155 "name": "NewBaseBdev", 00:15:39.155 "aliases": [ 00:15:39.155 "bc790a99-cab5-492c-b6c8-6e67a1d9715b" 00:15:39.155 ], 00:15:39.155 "product_name": "Malloc disk", 00:15:39.155 "block_size": 512, 00:15:39.155 "num_blocks": 65536, 00:15:39.155 "uuid": "bc790a99-cab5-492c-b6c8-6e67a1d9715b", 00:15:39.155 "assigned_rate_limits": { 00:15:39.155 "rw_ios_per_sec": 0, 00:15:39.155 "rw_mbytes_per_sec": 0, 00:15:39.155 "r_mbytes_per_sec": 0, 00:15:39.155 "w_mbytes_per_sec": 0 00:15:39.155 }, 00:15:39.155 "claimed": true, 00:15:39.155 "claim_type": "exclusive_write", 00:15:39.155 "zoned": false, 00:15:39.155 "supported_io_types": { 00:15:39.155 "read": true, 00:15:39.155 "write": true, 00:15:39.155 "unmap": true, 00:15:39.155 "flush": true, 00:15:39.155 "reset": true, 00:15:39.155 "nvme_admin": false, 00:15:39.155 "nvme_io": false, 00:15:39.155 "nvme_io_md": false, 00:15:39.155 "write_zeroes": true, 00:15:39.155 "zcopy": true, 00:15:39.155 "get_zone_info": false, 00:15:39.155 "zone_management": false, 00:15:39.155 "zone_append": false, 00:15:39.155 "compare": false, 00:15:39.155 "compare_and_write": false, 00:15:39.155 "abort": true, 00:15:39.155 "seek_hole": false, 00:15:39.155 "seek_data": false, 00:15:39.155 "copy": true, 00:15:39.155 "nvme_iov_md": false 00:15:39.155 }, 00:15:39.155 "memory_domains": [ 00:15:39.155 { 00:15:39.155 "dma_device_id": "system", 00:15:39.155 "dma_device_type": 1 00:15:39.155 }, 00:15:39.155 { 00:15:39.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.155 "dma_device_type": 2 00:15:39.155 } 00:15:39.155 ], 00:15:39.155 "driver_specific": {} 00:15:39.155 } 00:15:39.155 ] 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.155 18:12:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.155 "name": "Existed_Raid", 00:15:39.155 "uuid": "297bbeaf-6847-4504-9692-f3d98505b968", 00:15:39.155 "strip_size_kb": 64, 00:15:39.155 "state": "online", 
00:15:39.155 "raid_level": "raid5f", 00:15:39.155 "superblock": false, 00:15:39.155 "num_base_bdevs": 3, 00:15:39.155 "num_base_bdevs_discovered": 3, 00:15:39.155 "num_base_bdevs_operational": 3, 00:15:39.155 "base_bdevs_list": [ 00:15:39.155 { 00:15:39.155 "name": "NewBaseBdev", 00:15:39.155 "uuid": "bc790a99-cab5-492c-b6c8-6e67a1d9715b", 00:15:39.155 "is_configured": true, 00:15:39.155 "data_offset": 0, 00:15:39.155 "data_size": 65536 00:15:39.155 }, 00:15:39.155 { 00:15:39.155 "name": "BaseBdev2", 00:15:39.155 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:39.155 "is_configured": true, 00:15:39.155 "data_offset": 0, 00:15:39.155 "data_size": 65536 00:15:39.155 }, 00:15:39.155 { 00:15:39.155 "name": "BaseBdev3", 00:15:39.155 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:39.155 "is_configured": true, 00:15:39.155 "data_offset": 0, 00:15:39.155 "data_size": 65536 00:15:39.155 } 00:15:39.155 ] 00:15:39.155 }' 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.155 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.721 [2024-12-06 18:12:51.668526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:39.721 "name": "Existed_Raid", 00:15:39.721 "aliases": [ 00:15:39.721 "297bbeaf-6847-4504-9692-f3d98505b968" 00:15:39.721 ], 00:15:39.721 "product_name": "Raid Volume", 00:15:39.721 "block_size": 512, 00:15:39.721 "num_blocks": 131072, 00:15:39.721 "uuid": "297bbeaf-6847-4504-9692-f3d98505b968", 00:15:39.721 "assigned_rate_limits": { 00:15:39.721 "rw_ios_per_sec": 0, 00:15:39.721 "rw_mbytes_per_sec": 0, 00:15:39.721 "r_mbytes_per_sec": 0, 00:15:39.721 "w_mbytes_per_sec": 0 00:15:39.721 }, 00:15:39.721 "claimed": false, 00:15:39.721 "zoned": false, 00:15:39.721 "supported_io_types": { 00:15:39.721 "read": true, 00:15:39.721 "write": true, 00:15:39.721 "unmap": false, 00:15:39.721 "flush": false, 00:15:39.721 "reset": true, 00:15:39.721 "nvme_admin": false, 00:15:39.721 "nvme_io": false, 00:15:39.721 "nvme_io_md": false, 00:15:39.721 "write_zeroes": true, 00:15:39.721 "zcopy": false, 00:15:39.721 "get_zone_info": false, 00:15:39.721 "zone_management": false, 00:15:39.721 "zone_append": false, 00:15:39.721 "compare": false, 00:15:39.721 "compare_and_write": false, 00:15:39.721 "abort": false, 00:15:39.721 "seek_hole": false, 00:15:39.721 "seek_data": false, 00:15:39.721 "copy": false, 00:15:39.721 "nvme_iov_md": false 00:15:39.721 }, 00:15:39.721 "driver_specific": { 00:15:39.721 "raid": { 00:15:39.721 "uuid": "297bbeaf-6847-4504-9692-f3d98505b968", 
00:15:39.721 "strip_size_kb": 64, 00:15:39.721 "state": "online", 00:15:39.721 "raid_level": "raid5f", 00:15:39.721 "superblock": false, 00:15:39.721 "num_base_bdevs": 3, 00:15:39.721 "num_base_bdevs_discovered": 3, 00:15:39.721 "num_base_bdevs_operational": 3, 00:15:39.721 "base_bdevs_list": [ 00:15:39.721 { 00:15:39.721 "name": "NewBaseBdev", 00:15:39.721 "uuid": "bc790a99-cab5-492c-b6c8-6e67a1d9715b", 00:15:39.721 "is_configured": true, 00:15:39.721 "data_offset": 0, 00:15:39.721 "data_size": 65536 00:15:39.721 }, 00:15:39.721 { 00:15:39.721 "name": "BaseBdev2", 00:15:39.721 "uuid": "29f83041-0a57-4851-9e87-65a1ba4047dd", 00:15:39.721 "is_configured": true, 00:15:39.721 "data_offset": 0, 00:15:39.721 "data_size": 65536 00:15:39.721 }, 00:15:39.721 { 00:15:39.721 "name": "BaseBdev3", 00:15:39.721 "uuid": "45adec1a-2677-441d-b5d8-08aa490028b4", 00:15:39.721 "is_configured": true, 00:15:39.721 "data_offset": 0, 00:15:39.721 "data_size": 65536 00:15:39.721 } 00:15:39.721 ] 00:15:39.721 } 00:15:39.721 } 00:15:39.721 }' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:39.721 BaseBdev2 00:15:39.721 BaseBdev3' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.721 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.721 [2024-12-06 18:12:51.883926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.721 [2024-12-06 18:12:51.883968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.721 [2024-12-06 18:12:51.884099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.721 [2024-12-06 18:12:51.884437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.721 [2024-12-06 18:12:51.884460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:39.979 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80431 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80431 ']' 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
80431 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80431 00:15:39.980 killing process with pid 80431 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80431' 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80431 00:15:39.980 [2024-12-06 18:12:51.925659] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.980 18:12:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80431 00:15:40.239 [2024-12-06 18:12:52.275169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:41.698 00:15:41.698 real 0m11.140s 00:15:41.698 user 0m17.568s 00:15:41.698 sys 0m1.923s 00:15:41.698 ************************************ 00:15:41.698 END TEST raid5f_state_function_test 00:15:41.698 ************************************ 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.698 18:12:53 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:41.698 18:12:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:15:41.698 18:12:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.698 18:12:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.698 ************************************ 00:15:41.698 START TEST raid5f_state_function_test_sb 00:15:41.698 ************************************ 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81057 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81057' 00:15:41.698 Process raid pid: 81057 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81057 00:15:41.698 18:12:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81057 ']' 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.698 18:12:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.698 [2024-12-06 18:12:53.759702] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:15:41.698 [2024-12-06 18:12:53.759928] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.957 [2024-12-06 18:12:53.941594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.957 [2024-12-06 18:12:54.078848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.215 [2024-12-06 18:12:54.301606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.215 [2024-12-06 18:12:54.301658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:42.783 18:12:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.783 [2024-12-06 18:12:54.690142] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.783 [2024-12-06 18:12:54.690214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.783 [2024-12-06 18:12:54.690227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.783 [2024-12-06 18:12:54.690239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.783 [2024-12-06 18:12:54.690252] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:42.783 [2024-12-06 18:12:54.690263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.783 "name": "Existed_Raid", 00:15:42.783 "uuid": "77ea50f5-67be-4a91-ba9c-5b730c9d27bf", 00:15:42.783 "strip_size_kb": 64, 00:15:42.783 "state": "configuring", 00:15:42.783 "raid_level": "raid5f", 00:15:42.783 "superblock": true, 00:15:42.783 "num_base_bdevs": 3, 00:15:42.783 "num_base_bdevs_discovered": 0, 00:15:42.783 "num_base_bdevs_operational": 3, 00:15:42.783 "base_bdevs_list": [ 00:15:42.783 { 00:15:42.783 "name": "BaseBdev1", 00:15:42.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.783 "is_configured": false, 00:15:42.783 "data_offset": 0, 00:15:42.783 "data_size": 0 00:15:42.783 }, 00:15:42.783 { 00:15:42.783 "name": "BaseBdev2", 00:15:42.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.783 "is_configured": false, 00:15:42.783 
"data_offset": 0, 00:15:42.783 "data_size": 0 00:15:42.783 }, 00:15:42.783 { 00:15:42.783 "name": "BaseBdev3", 00:15:42.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.783 "is_configured": false, 00:15:42.783 "data_offset": 0, 00:15:42.783 "data_size": 0 00:15:42.783 } 00:15:42.783 ] 00:15:42.783 }' 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.783 18:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.043 [2024-12-06 18:12:55.121340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.043 [2024-12-06 18:12:55.121462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.043 [2024-12-06 18:12:55.129350] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.043 [2024-12-06 18:12:55.129483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.043 [2024-12-06 18:12:55.129519] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.043 [2024-12-06 18:12:55.129549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.043 [2024-12-06 18:12:55.129572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.043 [2024-12-06 18:12:55.129598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.043 [2024-12-06 18:12:55.181365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.043 BaseBdev1 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.043 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.044 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.302 [ 00:15:43.302 { 00:15:43.302 "name": "BaseBdev1", 00:15:43.302 "aliases": [ 00:15:43.302 "513d310b-de44-4353-84dd-1e047c773986" 00:15:43.302 ], 00:15:43.302 "product_name": "Malloc disk", 00:15:43.302 "block_size": 512, 00:15:43.302 "num_blocks": 65536, 00:15:43.302 "uuid": "513d310b-de44-4353-84dd-1e047c773986", 00:15:43.302 "assigned_rate_limits": { 00:15:43.302 "rw_ios_per_sec": 0, 00:15:43.302 "rw_mbytes_per_sec": 0, 00:15:43.302 "r_mbytes_per_sec": 0, 00:15:43.302 "w_mbytes_per_sec": 0 00:15:43.302 }, 00:15:43.302 "claimed": true, 00:15:43.302 "claim_type": "exclusive_write", 00:15:43.302 "zoned": false, 00:15:43.302 "supported_io_types": { 00:15:43.302 "read": true, 00:15:43.302 "write": true, 00:15:43.302 "unmap": true, 00:15:43.302 "flush": true, 00:15:43.302 "reset": true, 00:15:43.302 "nvme_admin": false, 00:15:43.302 "nvme_io": false, 00:15:43.302 "nvme_io_md": false, 00:15:43.302 "write_zeroes": true, 00:15:43.302 "zcopy": true, 00:15:43.302 "get_zone_info": false, 00:15:43.302 "zone_management": false, 00:15:43.302 "zone_append": false, 00:15:43.302 "compare": false, 00:15:43.302 "compare_and_write": false, 00:15:43.302 "abort": true, 00:15:43.302 "seek_hole": false, 00:15:43.302 
"seek_data": false, 00:15:43.302 "copy": true, 00:15:43.302 "nvme_iov_md": false 00:15:43.302 }, 00:15:43.302 "memory_domains": [ 00:15:43.302 { 00:15:43.302 "dma_device_id": "system", 00:15:43.302 "dma_device_type": 1 00:15:43.302 }, 00:15:43.302 { 00:15:43.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.302 "dma_device_type": 2 00:15:43.302 } 00:15:43.302 ], 00:15:43.302 "driver_specific": {} 00:15:43.302 } 00:15:43.302 ] 00:15:43.302 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.302 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:43.302 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.302 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.302 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.302 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.303 "name": "Existed_Raid", 00:15:43.303 "uuid": "de4da754-0c33-4fae-9e1b-f335e37a4784", 00:15:43.303 "strip_size_kb": 64, 00:15:43.303 "state": "configuring", 00:15:43.303 "raid_level": "raid5f", 00:15:43.303 "superblock": true, 00:15:43.303 "num_base_bdevs": 3, 00:15:43.303 "num_base_bdevs_discovered": 1, 00:15:43.303 "num_base_bdevs_operational": 3, 00:15:43.303 "base_bdevs_list": [ 00:15:43.303 { 00:15:43.303 "name": "BaseBdev1", 00:15:43.303 "uuid": "513d310b-de44-4353-84dd-1e047c773986", 00:15:43.303 "is_configured": true, 00:15:43.303 "data_offset": 2048, 00:15:43.303 "data_size": 63488 00:15:43.303 }, 00:15:43.303 { 00:15:43.303 "name": "BaseBdev2", 00:15:43.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.303 "is_configured": false, 00:15:43.303 "data_offset": 0, 00:15:43.303 "data_size": 0 00:15:43.303 }, 00:15:43.303 { 00:15:43.303 "name": "BaseBdev3", 00:15:43.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.303 "is_configured": false, 00:15:43.303 "data_offset": 0, 00:15:43.303 "data_size": 0 00:15:43.303 } 00:15:43.303 ] 00:15:43.303 }' 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.303 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.562 [2024-12-06 18:12:55.700587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.562 [2024-12-06 18:12:55.700656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.562 [2024-12-06 18:12:55.712666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.562 [2024-12-06 18:12:55.714845] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.562 [2024-12-06 18:12:55.714963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.562 [2024-12-06 18:12:55.715001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.562 [2024-12-06 18:12:55.715029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.562 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.821 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.821 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.821 "name": 
"Existed_Raid", 00:15:43.821 "uuid": "b34f7b90-7bab-480d-b150-6dff5182f16a", 00:15:43.821 "strip_size_kb": 64, 00:15:43.821 "state": "configuring", 00:15:43.821 "raid_level": "raid5f", 00:15:43.821 "superblock": true, 00:15:43.821 "num_base_bdevs": 3, 00:15:43.821 "num_base_bdevs_discovered": 1, 00:15:43.821 "num_base_bdevs_operational": 3, 00:15:43.821 "base_bdevs_list": [ 00:15:43.821 { 00:15:43.821 "name": "BaseBdev1", 00:15:43.821 "uuid": "513d310b-de44-4353-84dd-1e047c773986", 00:15:43.821 "is_configured": true, 00:15:43.821 "data_offset": 2048, 00:15:43.821 "data_size": 63488 00:15:43.821 }, 00:15:43.821 { 00:15:43.821 "name": "BaseBdev2", 00:15:43.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.821 "is_configured": false, 00:15:43.821 "data_offset": 0, 00:15:43.821 "data_size": 0 00:15:43.821 }, 00:15:43.821 { 00:15:43.821 "name": "BaseBdev3", 00:15:43.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.821 "is_configured": false, 00:15:43.821 "data_offset": 0, 00:15:43.821 "data_size": 0 00:15:43.821 } 00:15:43.821 ] 00:15:43.821 }' 00:15:43.821 18:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.821 18:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.080 [2024-12-06 18:12:56.240802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.080 BaseBdev2 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.080 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.340 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.341 [ 00:15:44.341 { 00:15:44.341 "name": "BaseBdev2", 00:15:44.341 "aliases": [ 00:15:44.341 "5709f779-da14-4fad-9cec-1c36fddcae07" 00:15:44.341 ], 00:15:44.341 "product_name": "Malloc disk", 00:15:44.341 "block_size": 512, 00:15:44.341 "num_blocks": 65536, 00:15:44.341 "uuid": "5709f779-da14-4fad-9cec-1c36fddcae07", 00:15:44.341 "assigned_rate_limits": { 00:15:44.341 "rw_ios_per_sec": 0, 00:15:44.341 "rw_mbytes_per_sec": 0, 00:15:44.341 "r_mbytes_per_sec": 0, 00:15:44.341 "w_mbytes_per_sec": 0 00:15:44.341 }, 00:15:44.341 "claimed": true, 
00:15:44.341 "claim_type": "exclusive_write", 00:15:44.341 "zoned": false, 00:15:44.341 "supported_io_types": { 00:15:44.341 "read": true, 00:15:44.341 "write": true, 00:15:44.341 "unmap": true, 00:15:44.341 "flush": true, 00:15:44.341 "reset": true, 00:15:44.341 "nvme_admin": false, 00:15:44.341 "nvme_io": false, 00:15:44.341 "nvme_io_md": false, 00:15:44.341 "write_zeroes": true, 00:15:44.341 "zcopy": true, 00:15:44.341 "get_zone_info": false, 00:15:44.341 "zone_management": false, 00:15:44.341 "zone_append": false, 00:15:44.341 "compare": false, 00:15:44.341 "compare_and_write": false, 00:15:44.341 "abort": true, 00:15:44.341 "seek_hole": false, 00:15:44.341 "seek_data": false, 00:15:44.341 "copy": true, 00:15:44.341 "nvme_iov_md": false 00:15:44.341 }, 00:15:44.341 "memory_domains": [ 00:15:44.341 { 00:15:44.341 "dma_device_id": "system", 00:15:44.341 "dma_device_type": 1 00:15:44.341 }, 00:15:44.341 { 00:15:44.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.341 "dma_device_type": 2 00:15:44.341 } 00:15:44.341 ], 00:15:44.341 "driver_specific": {} 00:15:44.341 } 00:15:44.341 ] 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.341 18:12:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.341 "name": "Existed_Raid", 00:15:44.341 "uuid": "b34f7b90-7bab-480d-b150-6dff5182f16a", 00:15:44.341 "strip_size_kb": 64, 00:15:44.341 "state": "configuring", 00:15:44.341 "raid_level": "raid5f", 00:15:44.341 "superblock": true, 00:15:44.341 "num_base_bdevs": 3, 00:15:44.341 "num_base_bdevs_discovered": 2, 00:15:44.341 "num_base_bdevs_operational": 3, 00:15:44.341 "base_bdevs_list": [ 00:15:44.341 { 00:15:44.341 "name": "BaseBdev1", 00:15:44.341 "uuid": "513d310b-de44-4353-84dd-1e047c773986", 
00:15:44.341 "is_configured": true, 00:15:44.341 "data_offset": 2048, 00:15:44.341 "data_size": 63488 00:15:44.341 }, 00:15:44.341 { 00:15:44.341 "name": "BaseBdev2", 00:15:44.341 "uuid": "5709f779-da14-4fad-9cec-1c36fddcae07", 00:15:44.341 "is_configured": true, 00:15:44.341 "data_offset": 2048, 00:15:44.341 "data_size": 63488 00:15:44.341 }, 00:15:44.341 { 00:15:44.341 "name": "BaseBdev3", 00:15:44.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.341 "is_configured": false, 00:15:44.341 "data_offset": 0, 00:15:44.341 "data_size": 0 00:15:44.341 } 00:15:44.341 ] 00:15:44.341 }' 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.341 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.600 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.600 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.600 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.859 [2024-12-06 18:12:56.777855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.859 [2024-12-06 18:12:56.778215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:44.859 [2024-12-06 18:12:56.778240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:44.859 [2024-12-06 18:12:56.778560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:44.859 BaseBdev3 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.859 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.860 [2024-12-06 18:12:56.785329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:44.860 [2024-12-06 18:12:56.785415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:44.860 [2024-12-06 18:12:56.785831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.860 [ 00:15:44.860 { 00:15:44.860 "name": "BaseBdev3", 00:15:44.860 "aliases": [ 00:15:44.860 "fc59b8e8-393e-4969-a18a-4db8480d7da5" 00:15:44.860 ], 00:15:44.860 "product_name": "Malloc disk", 00:15:44.860 "block_size": 512, 00:15:44.860 
"num_blocks": 65536, 00:15:44.860 "uuid": "fc59b8e8-393e-4969-a18a-4db8480d7da5", 00:15:44.860 "assigned_rate_limits": { 00:15:44.860 "rw_ios_per_sec": 0, 00:15:44.860 "rw_mbytes_per_sec": 0, 00:15:44.860 "r_mbytes_per_sec": 0, 00:15:44.860 "w_mbytes_per_sec": 0 00:15:44.860 }, 00:15:44.860 "claimed": true, 00:15:44.860 "claim_type": "exclusive_write", 00:15:44.860 "zoned": false, 00:15:44.860 "supported_io_types": { 00:15:44.860 "read": true, 00:15:44.860 "write": true, 00:15:44.860 "unmap": true, 00:15:44.860 "flush": true, 00:15:44.860 "reset": true, 00:15:44.860 "nvme_admin": false, 00:15:44.860 "nvme_io": false, 00:15:44.860 "nvme_io_md": false, 00:15:44.860 "write_zeroes": true, 00:15:44.860 "zcopy": true, 00:15:44.860 "get_zone_info": false, 00:15:44.860 "zone_management": false, 00:15:44.860 "zone_append": false, 00:15:44.860 "compare": false, 00:15:44.860 "compare_and_write": false, 00:15:44.860 "abort": true, 00:15:44.860 "seek_hole": false, 00:15:44.860 "seek_data": false, 00:15:44.860 "copy": true, 00:15:44.860 "nvme_iov_md": false 00:15:44.860 }, 00:15:44.860 "memory_domains": [ 00:15:44.860 { 00:15:44.860 "dma_device_id": "system", 00:15:44.860 "dma_device_type": 1 00:15:44.860 }, 00:15:44.860 { 00:15:44.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.860 "dma_device_type": 2 00:15:44.860 } 00:15:44.860 ], 00:15:44.860 "driver_specific": {} 00:15:44.860 } 00:15:44.860 ] 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.860 "name": "Existed_Raid", 00:15:44.860 "uuid": "b34f7b90-7bab-480d-b150-6dff5182f16a", 00:15:44.860 "strip_size_kb": 64, 00:15:44.860 "state": "online", 00:15:44.860 "raid_level": "raid5f", 00:15:44.860 "superblock": true, 
00:15:44.860 "num_base_bdevs": 3, 00:15:44.860 "num_base_bdevs_discovered": 3, 00:15:44.860 "num_base_bdevs_operational": 3, 00:15:44.860 "base_bdevs_list": [ 00:15:44.860 { 00:15:44.860 "name": "BaseBdev1", 00:15:44.860 "uuid": "513d310b-de44-4353-84dd-1e047c773986", 00:15:44.860 "is_configured": true, 00:15:44.860 "data_offset": 2048, 00:15:44.860 "data_size": 63488 00:15:44.860 }, 00:15:44.860 { 00:15:44.860 "name": "BaseBdev2", 00:15:44.860 "uuid": "5709f779-da14-4fad-9cec-1c36fddcae07", 00:15:44.860 "is_configured": true, 00:15:44.860 "data_offset": 2048, 00:15:44.860 "data_size": 63488 00:15:44.860 }, 00:15:44.860 { 00:15:44.860 "name": "BaseBdev3", 00:15:44.860 "uuid": "fc59b8e8-393e-4969-a18a-4db8480d7da5", 00:15:44.860 "is_configured": true, 00:15:44.860 "data_offset": 2048, 00:15:44.860 "data_size": 63488 00:15:44.860 } 00:15:44.860 ] 00:15:44.860 }' 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.860 18:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.119 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.119 [2024-12-06 18:12:57.277110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.377 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.377 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.377 "name": "Existed_Raid", 00:15:45.377 "aliases": [ 00:15:45.377 "b34f7b90-7bab-480d-b150-6dff5182f16a" 00:15:45.377 ], 00:15:45.377 "product_name": "Raid Volume", 00:15:45.377 "block_size": 512, 00:15:45.377 "num_blocks": 126976, 00:15:45.377 "uuid": "b34f7b90-7bab-480d-b150-6dff5182f16a", 00:15:45.377 "assigned_rate_limits": { 00:15:45.377 "rw_ios_per_sec": 0, 00:15:45.377 "rw_mbytes_per_sec": 0, 00:15:45.377 "r_mbytes_per_sec": 0, 00:15:45.377 "w_mbytes_per_sec": 0 00:15:45.377 }, 00:15:45.377 "claimed": false, 00:15:45.377 "zoned": false, 00:15:45.377 "supported_io_types": { 00:15:45.377 "read": true, 00:15:45.377 "write": true, 00:15:45.377 "unmap": false, 00:15:45.377 "flush": false, 00:15:45.377 "reset": true, 00:15:45.377 "nvme_admin": false, 00:15:45.377 "nvme_io": false, 00:15:45.377 "nvme_io_md": false, 00:15:45.377 "write_zeroes": true, 00:15:45.377 "zcopy": false, 00:15:45.377 "get_zone_info": false, 00:15:45.377 "zone_management": false, 00:15:45.377 "zone_append": false, 00:15:45.377 "compare": false, 00:15:45.377 "compare_and_write": false, 00:15:45.377 "abort": false, 00:15:45.377 "seek_hole": false, 00:15:45.377 "seek_data": false, 00:15:45.377 "copy": false, 00:15:45.377 "nvme_iov_md": false 00:15:45.377 }, 00:15:45.377 "driver_specific": { 00:15:45.377 "raid": { 00:15:45.377 "uuid": "b34f7b90-7bab-480d-b150-6dff5182f16a", 00:15:45.377 
"strip_size_kb": 64,
00:15:45.377 "state": "online",
00:15:45.377 "raid_level": "raid5f",
00:15:45.377 "superblock": true,
00:15:45.377 "num_base_bdevs": 3,
00:15:45.377 "num_base_bdevs_discovered": 3,
00:15:45.377 "num_base_bdevs_operational": 3,
00:15:45.377 "base_bdevs_list": [
00:15:45.377 {
00:15:45.377 "name": "BaseBdev1",
00:15:45.377 "uuid": "513d310b-de44-4353-84dd-1e047c773986",
00:15:45.377 "is_configured": true,
00:15:45.377 "data_offset": 2048,
00:15:45.377 "data_size": 63488
00:15:45.377 },
00:15:45.377 {
00:15:45.377 "name": "BaseBdev2",
00:15:45.377 "uuid": "5709f779-da14-4fad-9cec-1c36fddcae07",
00:15:45.377 "is_configured": true,
00:15:45.377 "data_offset": 2048,
00:15:45.377 "data_size": 63488
00:15:45.377 },
00:15:45.377 {
00:15:45.377 "name": "BaseBdev3",
00:15:45.377 "uuid": "fc59b8e8-393e-4969-a18a-4db8480d7da5",
00:15:45.377 "is_configured": true,
00:15:45.377 "data_offset": 2048,
00:15:45.377 "data_size": 63488
00:15:45.377 }
00:15:45.377 ]
00:15:45.377 }
00:15:45.377 }
00:15:45.377 }'
00:15:45.377 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:45.377 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:15:45.378 BaseBdev2
00:15:45.378 BaseBdev3'
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.378 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.378 [2024-12-06 18:12:57.524498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:45.637 "name": "Existed_Raid",
00:15:45.637 "uuid": "b34f7b90-7bab-480d-b150-6dff5182f16a",
00:15:45.637 "strip_size_kb": 64,
00:15:45.637 "state": "online",
00:15:45.637 "raid_level": "raid5f",
00:15:45.637 "superblock": true,
00:15:45.637 "num_base_bdevs": 3,
00:15:45.637 "num_base_bdevs_discovered": 2,
00:15:45.637 "num_base_bdevs_operational": 2,
00:15:45.637 "base_bdevs_list": [
00:15:45.637 {
00:15:45.637 "name": null,
00:15:45.637 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:45.637 "is_configured": false,
00:15:45.637 "data_offset": 0,
00:15:45.637 "data_size": 63488
00:15:45.637 },
00:15:45.637 {
00:15:45.637 "name": "BaseBdev2",
00:15:45.637 "uuid": "5709f779-da14-4fad-9cec-1c36fddcae07",
00:15:45.637 "is_configured": true,
00:15:45.637 "data_offset": 2048,
00:15:45.637 "data_size": 63488
00:15:45.637 },
00:15:45.637 {
00:15:45.637 "name": "BaseBdev3",
00:15:45.637 "uuid": "fc59b8e8-393e-4969-a18a-4db8480d7da5",
00:15:45.637 "is_configured": true,
00:15:45.637 "data_offset": 2048,
00:15:45.637 "data_size": 63488
00:15:45.637 }
00:15:45.637 ]
00:15:45.637 }'
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:45.637 18:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.205 [2024-12-06 18:12:58.154952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:46.205 [2024-12-06 18:12:58.155161] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:46.205 [2024-12-06 18:12:58.269388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.205 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.205 [2024-12-06 18:12:58.333376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:46.205 [2024-12-06 18:12:58.333447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.465 BaseBdev2
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.465 [
00:15:46.465 {
00:15:46.465 "name": "BaseBdev2",
00:15:46.465 "aliases": [
00:15:46.465 "3f3e393c-5cdf-44ea-be85-6b9a4a50e828"
00:15:46.465 ],
00:15:46.465 "product_name": "Malloc disk",
00:15:46.465 "block_size": 512,
00:15:46.465 "num_blocks": 65536,
00:15:46.465 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828",
00:15:46.465 "assigned_rate_limits": {
00:15:46.465 "rw_ios_per_sec": 0,
00:15:46.465 "rw_mbytes_per_sec": 0,
00:15:46.465 "r_mbytes_per_sec": 0,
00:15:46.465 "w_mbytes_per_sec": 0
00:15:46.465 },
00:15:46.465 "claimed": false,
00:15:46.465 "zoned": false,
00:15:46.465 "supported_io_types": {
00:15:46.465 "read": true,
00:15:46.465 "write": true,
00:15:46.465 "unmap": true,
00:15:46.465 "flush": true,
00:15:46.465 "reset": true,
00:15:46.465 "nvme_admin": false,
00:15:46.465 "nvme_io": false,
00:15:46.465 "nvme_io_md": false,
00:15:46.465 "write_zeroes": true,
00:15:46.465 "zcopy": true,
00:15:46.465 "get_zone_info": false,
00:15:46.465 "zone_management": false,
00:15:46.465 "zone_append": false,
00:15:46.465 "compare": false,
00:15:46.465 "compare_and_write": false,
00:15:46.465 "abort": true,
00:15:46.465 "seek_hole": false,
00:15:46.465 "seek_data": false,
00:15:46.465 "copy": true,
00:15:46.465 "nvme_iov_md": false
00:15:46.465 },
00:15:46.465 "memory_domains": [
00:15:46.465 {
00:15:46.465 "dma_device_id": "system",
00:15:46.465 "dma_device_type": 1
00:15:46.465 },
00:15:46.465 {
00:15:46.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:46.465 "dma_device_type": 2
00:15:46.465 }
00:15:46.465 ],
00:15:46.465 "driver_specific": {}
00:15:46.465 }
00:15:46.465 ]
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.465 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.725 BaseBdev3
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.725 [
00:15:46.725 {
00:15:46.725 "name": "BaseBdev3",
00:15:46.725 "aliases": [
00:15:46.725 "330694d4-94b5-4662-a099-a7d9e767c66b"
00:15:46.725 ],
00:15:46.725 "product_name": "Malloc disk",
00:15:46.725 "block_size": 512,
00:15:46.725 "num_blocks": 65536,
00:15:46.725 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b",
00:15:46.725 "assigned_rate_limits": {
00:15:46.725 "rw_ios_per_sec": 0,
00:15:46.725 "rw_mbytes_per_sec": 0,
00:15:46.725 "r_mbytes_per_sec": 0,
00:15:46.725 "w_mbytes_per_sec": 0
00:15:46.725 },
00:15:46.725 "claimed": false,
00:15:46.725 "zoned": false,
00:15:46.725 "supported_io_types": {
00:15:46.725 "read": true,
00:15:46.725 "write": true,
00:15:46.725 "unmap": true,
00:15:46.725 "flush": true,
00:15:46.725 "reset": true,
00:15:46.725 "nvme_admin": false,
00:15:46.725 "nvme_io": false,
00:15:46.725 "nvme_io_md": false,
00:15:46.725 "write_zeroes": true,
00:15:46.725 "zcopy": true,
00:15:46.725 "get_zone_info": false,
00:15:46.725 "zone_management": false,
00:15:46.725 "zone_append": false,
00:15:46.725 "compare": false,
00:15:46.725 "compare_and_write": false,
00:15:46.725 "abort": true,
00:15:46.725 "seek_hole": false,
00:15:46.725 "seek_data": false,
00:15:46.725 "copy": true,
00:15:46.725 "nvme_iov_md": false
00:15:46.725 },
00:15:46.725 "memory_domains": [
00:15:46.725 {
00:15:46.725 "dma_device_id": "system",
00:15:46.725 "dma_device_type": 1
00:15:46.725 },
00:15:46.725 {
00:15:46.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:46.725 "dma_device_type": 2
00:15:46.725 }
00:15:46.725 ],
00:15:46.725 "driver_specific": {}
00:15:46.725 }
00:15:46.725 ]
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.725 [2024-12-06 18:12:58.690681] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:46.725 [2024-12-06 18:12:58.690845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:46.725 [2024-12-06 18:12:58.690911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:46.725 [2024-12-06 18:12:58.693127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.725 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:46.725 "name": "Existed_Raid",
00:15:46.726 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a",
00:15:46.726 "strip_size_kb": 64,
00:15:46.726 "state": "configuring",
00:15:46.726 "raid_level": "raid5f",
00:15:46.726 "superblock": true,
00:15:46.726 "num_base_bdevs": 3,
00:15:46.726 "num_base_bdevs_discovered": 2,
00:15:46.726 "num_base_bdevs_operational": 3,
00:15:46.726 "base_bdevs_list": [
00:15:46.726 {
00:15:46.726 "name": "BaseBdev1",
00:15:46.726 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:46.726 "is_configured": false,
00:15:46.726 "data_offset": 0,
00:15:46.726 "data_size": 0
00:15:46.726 },
00:15:46.726 {
00:15:46.726 "name": "BaseBdev2",
00:15:46.726 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828",
00:15:46.726 "is_configured": true,
00:15:46.726 "data_offset": 2048,
00:15:46.726 "data_size": 63488
00:15:46.726 },
00:15:46.726 {
00:15:46.726 "name": "BaseBdev3",
00:15:46.726 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b",
00:15:46.726 "is_configured": true,
00:15:46.726 "data_offset": 2048,
00:15:46.726 "data_size": 63488
00:15:46.726 }
00:15:46.726 ]
00:15:46.726 }'
00:15:46.726 18:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:46.726 18:12:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:46.985 [2024-12-06 18:12:59.122594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.985 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.244 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.244 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:47.244 "name": "Existed_Raid",
00:15:47.244 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a",
00:15:47.244 "strip_size_kb": 64,
00:15:47.244 "state": "configuring",
00:15:47.244 "raid_level": "raid5f",
00:15:47.244 "superblock": true,
00:15:47.244 "num_base_bdevs": 3,
00:15:47.244 "num_base_bdevs_discovered": 1,
00:15:47.244 "num_base_bdevs_operational": 3,
00:15:47.244 "base_bdevs_list": [
00:15:47.244 {
00:15:47.244 "name": "BaseBdev1",
00:15:47.244 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:47.244 "is_configured": false,
00:15:47.244 "data_offset": 0,
00:15:47.244 "data_size": 0
00:15:47.244 },
00:15:47.244 {
00:15:47.244 "name": null,
00:15:47.244 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828",
00:15:47.244 "is_configured": false,
00:15:47.244 "data_offset": 0,
00:15:47.244 "data_size": 63488
00:15:47.244 },
00:15:47.244 {
00:15:47.244 "name": "BaseBdev3",
00:15:47.244 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b",
00:15:47.244 "is_configured": true,
00:15:47.244 "data_offset": 2048,
00:15:47.244 "data_size": 63488
00:15:47.244 }
00:15:47.244 ]
00:15:47.244 }'
00:15:47.244 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:47.244 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.503 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:47.503 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:47.503 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:47.503 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.503 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.503 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:15:47.503 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:47.503 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:47.503 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.773 [2024-12-06 18:12:59.671956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:47.773 BaseBdev1
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.773 [
00:15:47.773 {
00:15:47.773 "name": "BaseBdev1",
00:15:47.773 "aliases": [
00:15:47.773 "7c4d591f-2b9d-4741-955a-bd5d9267ff62"
00:15:47.773 ],
00:15:47.773 "product_name": "Malloc disk",
00:15:47.773 "block_size": 512,
00:15:47.773 "num_blocks": 65536,
00:15:47.773 "uuid": "7c4d591f-2b9d-4741-955a-bd5d9267ff62",
00:15:47.773 "assigned_rate_limits": {
00:15:47.773 "rw_ios_per_sec": 0,
00:15:47.773 "rw_mbytes_per_sec": 0,
00:15:47.773 "r_mbytes_per_sec": 0,
00:15:47.773 "w_mbytes_per_sec": 0
00:15:47.773 },
00:15:47.773 "claimed": true,
00:15:47.773 "claim_type": "exclusive_write",
00:15:47.773 "zoned": false,
00:15:47.773 "supported_io_types": {
00:15:47.773 "read": true,
00:15:47.773 "write": true,
00:15:47.773 "unmap": true,
00:15:47.773 "flush": true,
00:15:47.773 "reset": true,
00:15:47.773 "nvme_admin": false,
00:15:47.773 "nvme_io": false,
00:15:47.773 "nvme_io_md": false,
00:15:47.773 "write_zeroes": true,
00:15:47.773 "zcopy": true,
00:15:47.773 "get_zone_info": false,
00:15:47.773 "zone_management": false,
00:15:47.773 "zone_append": false,
00:15:47.773 "compare": false,
00:15:47.773 "compare_and_write": false,
00:15:47.773 "abort": true,
00:15:47.773 "seek_hole": false,
00:15:47.773 "seek_data": false,
00:15:47.773 "copy": true,
00:15:47.773 "nvme_iov_md": false
00:15:47.773 },
00:15:47.773 "memory_domains": [
00:15:47.773 {
00:15:47.773 "dma_device_id": "system",
00:15:47.773 "dma_device_type": 1
00:15:47.773 },
00:15:47.773 {
00:15:47.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:47.773 "dma_device_type": 2
00:15:47.773 }
00:15:47.773 ],
00:15:47.773 "driver_specific": {}
00:15:47.773 }
00:15:47.773 ]
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:47.773 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:47.774 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.774 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:47.774 "name": "Existed_Raid",
00:15:47.774 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a",
00:15:47.774 "strip_size_kb": 64,
00:15:47.774 "state": "configuring",
00:15:47.774 "raid_level": "raid5f",
00:15:47.774 "superblock": true,
00:15:47.774 "num_base_bdevs": 3,
00:15:47.774 "num_base_bdevs_discovered": 2,
00:15:47.774 "num_base_bdevs_operational": 3,
00:15:47.774 "base_bdevs_list": [
00:15:47.774 {
00:15:47.774 "name": "BaseBdev1",
00:15:47.774 "uuid": "7c4d591f-2b9d-4741-955a-bd5d9267ff62",
00:15:47.774 "is_configured": true,
00:15:47.774 "data_offset": 2048,
00:15:47.774 "data_size": 63488
00:15:47.774 },
00:15:47.774 {
00:15:47.774 "name": null,
00:15:47.774 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828",
00:15:47.774 "is_configured": false,
00:15:47.774 "data_offset": 0,
00:15:47.774 "data_size": 63488
00:15:47.774 },
00:15:47.774 {
00:15:47.774 "name": "BaseBdev3",
00:15:47.774 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b",
00:15:47.774 "is_configured": true,
00:15:47.774 "data_offset": 2048,
00:15:47.774 "data_size": 63488
00:15:47.774 }
00:15:47.774 ]
00:15:47.774 }'
00:15:47.774 18:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:47.774 18:12:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:48.341 [2024-12-06 18:13:00.239178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:48.341 "name": "Existed_Raid",
00:15:48.341 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a",
00:15:48.341 "strip_size_kb": 64,
00:15:48.341 "state": "configuring",
00:15:48.341 "raid_level": "raid5f",
00:15:48.341 "superblock": true,
00:15:48.341 "num_base_bdevs": 3,
00:15:48.341 "num_base_bdevs_discovered": 1,
00:15:48.341 "num_base_bdevs_operational": 3,
00:15:48.341 "base_bdevs_list": [
00:15:48.341 {
00:15:48.341 "name": "BaseBdev1",
00:15:48.341 "uuid": "7c4d591f-2b9d-4741-955a-bd5d9267ff62",
00:15:48.341 "is_configured": true,
00:15:48.341 "data_offset": 2048, 00:15:48.341 "data_size": 63488 00:15:48.341 }, 00:15:48.341 { 00:15:48.341 "name": null, 00:15:48.341 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828", 00:15:48.341 "is_configured": false, 00:15:48.341 "data_offset": 0, 00:15:48.341 "data_size": 63488 00:15:48.341 }, 00:15:48.341 { 00:15:48.341 "name": null, 00:15:48.341 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b", 00:15:48.341 "is_configured": false, 00:15:48.341 "data_offset": 0, 00:15:48.341 "data_size": 63488 00:15:48.341 } 00:15:48.341 ] 00:15:48.341 }' 00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.341 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.600 [2024-12-06 18:13:00.726375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.600 18:13:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.600 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.859 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:48.859 "name": "Existed_Raid", 00:15:48.859 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a", 00:15:48.859 "strip_size_kb": 64, 00:15:48.859 "state": "configuring", 00:15:48.859 "raid_level": "raid5f", 00:15:48.859 "superblock": true, 00:15:48.859 "num_base_bdevs": 3, 00:15:48.859 "num_base_bdevs_discovered": 2, 00:15:48.859 "num_base_bdevs_operational": 3, 00:15:48.859 "base_bdevs_list": [ 00:15:48.859 { 00:15:48.859 "name": "BaseBdev1", 00:15:48.859 "uuid": "7c4d591f-2b9d-4741-955a-bd5d9267ff62", 00:15:48.859 "is_configured": true, 00:15:48.859 "data_offset": 2048, 00:15:48.859 "data_size": 63488 00:15:48.859 }, 00:15:48.859 { 00:15:48.859 "name": null, 00:15:48.859 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828", 00:15:48.859 "is_configured": false, 00:15:48.859 "data_offset": 0, 00:15:48.859 "data_size": 63488 00:15:48.859 }, 00:15:48.859 { 00:15:48.859 "name": "BaseBdev3", 00:15:48.859 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b", 00:15:48.859 "is_configured": true, 00:15:48.859 "data_offset": 2048, 00:15:48.859 "data_size": 63488 00:15:48.859 } 00:15:48.859 ] 00:15:48.859 }' 00:15:48.859 18:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.859 18:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.119 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.119 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.119 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.119 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:49.119 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.119 18:13:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:49.119 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:49.119 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.119 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.119 [2024-12-06 18:13:01.237513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.377 18:13:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.377 "name": "Existed_Raid", 00:15:49.377 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a", 00:15:49.377 "strip_size_kb": 64, 00:15:49.377 "state": "configuring", 00:15:49.377 "raid_level": "raid5f", 00:15:49.377 "superblock": true, 00:15:49.377 "num_base_bdevs": 3, 00:15:49.377 "num_base_bdevs_discovered": 1, 00:15:49.377 "num_base_bdevs_operational": 3, 00:15:49.377 "base_bdevs_list": [ 00:15:49.377 { 00:15:49.377 "name": null, 00:15:49.377 "uuid": "7c4d591f-2b9d-4741-955a-bd5d9267ff62", 00:15:49.377 "is_configured": false, 00:15:49.377 "data_offset": 0, 00:15:49.377 "data_size": 63488 00:15:49.377 }, 00:15:49.377 { 00:15:49.377 "name": null, 00:15:49.377 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828", 00:15:49.377 "is_configured": false, 00:15:49.377 "data_offset": 0, 00:15:49.377 "data_size": 63488 00:15:49.377 }, 00:15:49.377 { 00:15:49.377 "name": "BaseBdev3", 00:15:49.377 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b", 00:15:49.377 "is_configured": true, 00:15:49.377 "data_offset": 2048, 00:15:49.377 "data_size": 63488 00:15:49.377 } 00:15:49.377 ] 00:15:49.377 }' 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.377 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.944 [2024-12-06 18:13:01.865968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.944 
18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.944 "name": "Existed_Raid", 00:15:49.944 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a", 00:15:49.944 "strip_size_kb": 64, 00:15:49.944 "state": "configuring", 00:15:49.944 "raid_level": "raid5f", 00:15:49.944 "superblock": true, 00:15:49.944 "num_base_bdevs": 3, 00:15:49.944 "num_base_bdevs_discovered": 2, 00:15:49.944 "num_base_bdevs_operational": 3, 00:15:49.944 "base_bdevs_list": [ 00:15:49.944 { 00:15:49.944 "name": null, 00:15:49.944 "uuid": "7c4d591f-2b9d-4741-955a-bd5d9267ff62", 00:15:49.944 "is_configured": false, 00:15:49.944 "data_offset": 0, 00:15:49.944 "data_size": 63488 00:15:49.944 }, 00:15:49.944 { 00:15:49.944 "name": "BaseBdev2", 00:15:49.944 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828", 00:15:49.944 "is_configured": true, 00:15:49.944 "data_offset": 2048, 00:15:49.944 "data_size": 63488 00:15:49.944 }, 
00:15:49.944 { 00:15:49.944 "name": "BaseBdev3", 00:15:49.944 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b", 00:15:49.944 "is_configured": true, 00:15:49.944 "data_offset": 2048, 00:15:49.944 "data_size": 63488 00:15:49.944 } 00:15:49.944 ] 00:15:49.944 }' 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.944 18:13:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.205 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.205 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.205 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.205 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7c4d591f-2b9d-4741-955a-bd5d9267ff62 00:15:50.464 18:13:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.464 [2024-12-06 18:13:02.511353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:50.464 [2024-12-06 18:13:02.511647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:50.464 [2024-12-06 18:13:02.511667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:50.464 [2024-12-06 18:13:02.511954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:50.464 NewBaseBdev 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.464 [2024-12-06 18:13:02.518848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:15:50.464 [2024-12-06 18:13:02.518949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:50.464 [2024-12-06 18:13:02.519420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.464 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.464 [ 00:15:50.464 { 00:15:50.464 "name": "NewBaseBdev", 00:15:50.465 "aliases": [ 00:15:50.465 "7c4d591f-2b9d-4741-955a-bd5d9267ff62" 00:15:50.465 ], 00:15:50.465 "product_name": "Malloc disk", 00:15:50.465 "block_size": 512, 00:15:50.465 "num_blocks": 65536, 00:15:50.465 "uuid": "7c4d591f-2b9d-4741-955a-bd5d9267ff62", 00:15:50.465 "assigned_rate_limits": { 00:15:50.465 "rw_ios_per_sec": 0, 00:15:50.465 "rw_mbytes_per_sec": 0, 00:15:50.465 "r_mbytes_per_sec": 0, 00:15:50.465 "w_mbytes_per_sec": 0 00:15:50.465 }, 00:15:50.465 "claimed": true, 00:15:50.465 "claim_type": "exclusive_write", 00:15:50.465 "zoned": false, 00:15:50.465 "supported_io_types": { 00:15:50.465 "read": true, 00:15:50.465 "write": true, 00:15:50.465 "unmap": true, 00:15:50.465 "flush": true, 00:15:50.465 "reset": true, 00:15:50.465 "nvme_admin": false, 00:15:50.465 "nvme_io": false, 00:15:50.465 "nvme_io_md": false, 00:15:50.465 "write_zeroes": true, 00:15:50.465 "zcopy": true, 00:15:50.465 "get_zone_info": false, 00:15:50.465 "zone_management": false, 00:15:50.465 "zone_append": false, 00:15:50.465 "compare": false, 00:15:50.465 "compare_and_write": false, 00:15:50.465 "abort": true, 00:15:50.465 "seek_hole": false, 
00:15:50.465 "seek_data": false, 00:15:50.465 "copy": true, 00:15:50.465 "nvme_iov_md": false 00:15:50.465 }, 00:15:50.465 "memory_domains": [ 00:15:50.465 { 00:15:50.465 "dma_device_id": "system", 00:15:50.465 "dma_device_type": 1 00:15:50.465 }, 00:15:50.465 { 00:15:50.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.465 "dma_device_type": 2 00:15:50.465 } 00:15:50.465 ], 00:15:50.465 "driver_specific": {} 00:15:50.465 } 00:15:50.465 ] 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.465 "name": "Existed_Raid", 00:15:50.465 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a", 00:15:50.465 "strip_size_kb": 64, 00:15:50.465 "state": "online", 00:15:50.465 "raid_level": "raid5f", 00:15:50.465 "superblock": true, 00:15:50.465 "num_base_bdevs": 3, 00:15:50.465 "num_base_bdevs_discovered": 3, 00:15:50.465 "num_base_bdevs_operational": 3, 00:15:50.465 "base_bdevs_list": [ 00:15:50.465 { 00:15:50.465 "name": "NewBaseBdev", 00:15:50.465 "uuid": "7c4d591f-2b9d-4741-955a-bd5d9267ff62", 00:15:50.465 "is_configured": true, 00:15:50.465 "data_offset": 2048, 00:15:50.465 "data_size": 63488 00:15:50.465 }, 00:15:50.465 { 00:15:50.465 "name": "BaseBdev2", 00:15:50.465 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828", 00:15:50.465 "is_configured": true, 00:15:50.465 "data_offset": 2048, 00:15:50.465 "data_size": 63488 00:15:50.465 }, 00:15:50.465 { 00:15:50.465 "name": "BaseBdev3", 00:15:50.465 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b", 00:15:50.465 "is_configured": true, 00:15:50.465 "data_offset": 2048, 00:15:50.465 "data_size": 63488 00:15:50.465 } 00:15:50.465 ] 00:15:50.465 }' 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.465 18:13:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.034 [2024-12-06 18:13:03.018480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.034 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.034 "name": "Existed_Raid", 00:15:51.034 "aliases": [ 00:15:51.034 "b8d6bcf8-c68e-46d8-94f5-21bd533f921a" 00:15:51.035 ], 00:15:51.035 "product_name": "Raid Volume", 00:15:51.035 "block_size": 512, 00:15:51.035 "num_blocks": 126976, 00:15:51.035 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a", 00:15:51.035 "assigned_rate_limits": { 00:15:51.035 "rw_ios_per_sec": 0, 00:15:51.035 "rw_mbytes_per_sec": 0, 00:15:51.035 "r_mbytes_per_sec": 0, 00:15:51.035 "w_mbytes_per_sec": 0 00:15:51.035 }, 00:15:51.035 "claimed": false, 00:15:51.035 "zoned": false, 00:15:51.035 
"supported_io_types": { 00:15:51.035 "read": true, 00:15:51.035 "write": true, 00:15:51.035 "unmap": false, 00:15:51.035 "flush": false, 00:15:51.035 "reset": true, 00:15:51.035 "nvme_admin": false, 00:15:51.035 "nvme_io": false, 00:15:51.035 "nvme_io_md": false, 00:15:51.035 "write_zeroes": true, 00:15:51.035 "zcopy": false, 00:15:51.035 "get_zone_info": false, 00:15:51.035 "zone_management": false, 00:15:51.035 "zone_append": false, 00:15:51.035 "compare": false, 00:15:51.035 "compare_and_write": false, 00:15:51.035 "abort": false, 00:15:51.035 "seek_hole": false, 00:15:51.035 "seek_data": false, 00:15:51.035 "copy": false, 00:15:51.035 "nvme_iov_md": false 00:15:51.035 }, 00:15:51.035 "driver_specific": { 00:15:51.035 "raid": { 00:15:51.035 "uuid": "b8d6bcf8-c68e-46d8-94f5-21bd533f921a", 00:15:51.035 "strip_size_kb": 64, 00:15:51.035 "state": "online", 00:15:51.035 "raid_level": "raid5f", 00:15:51.035 "superblock": true, 00:15:51.035 "num_base_bdevs": 3, 00:15:51.035 "num_base_bdevs_discovered": 3, 00:15:51.035 "num_base_bdevs_operational": 3, 00:15:51.035 "base_bdevs_list": [ 00:15:51.035 { 00:15:51.035 "name": "NewBaseBdev", 00:15:51.035 "uuid": "7c4d591f-2b9d-4741-955a-bd5d9267ff62", 00:15:51.035 "is_configured": true, 00:15:51.035 "data_offset": 2048, 00:15:51.035 "data_size": 63488 00:15:51.035 }, 00:15:51.035 { 00:15:51.035 "name": "BaseBdev2", 00:15:51.035 "uuid": "3f3e393c-5cdf-44ea-be85-6b9a4a50e828", 00:15:51.035 "is_configured": true, 00:15:51.035 "data_offset": 2048, 00:15:51.035 "data_size": 63488 00:15:51.035 }, 00:15:51.035 { 00:15:51.035 "name": "BaseBdev3", 00:15:51.035 "uuid": "330694d4-94b5-4662-a099-a7d9e767c66b", 00:15:51.035 "is_configured": true, 00:15:51.035 "data_offset": 2048, 00:15:51.035 "data_size": 63488 00:15:51.035 } 00:15:51.035 ] 00:15:51.035 } 00:15:51.035 } 00:15:51.035 }' 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:51.035 BaseBdev2 00:15:51.035 BaseBdev3' 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:51.035 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.295 [2024-12-06 18:13:03.293788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.295 [2024-12-06 18:13:03.293913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:51.295 [2024-12-06 18:13:03.294030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.295 [2024-12-06 18:13:03.294392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.295 [2024-12-06 18:13:03.294412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81057 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81057 ']' 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81057 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81057 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81057' 00:15:51.295 killing process with pid 81057 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81057 00:15:51.295 [2024-12-06 18:13:03.341238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.295 18:13:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 81057 00:15:51.555 [2024-12-06 18:13:03.693990] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.946 18:13:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:52.946 00:15:52.946 real 0m11.351s 00:15:52.946 user 0m17.871s 00:15:52.946 sys 0m2.035s 00:15:52.946 18:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.946 18:13:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.946 ************************************ 00:15:52.946 END TEST raid5f_state_function_test_sb 00:15:52.946 ************************************ 00:15:52.946 18:13:05 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:52.946 18:13:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:52.946 18:13:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.946 18:13:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.946 ************************************ 00:15:52.946 START TEST raid5f_superblock_test 00:15:52.946 ************************************ 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81680 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81680 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81680 ']' 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:52.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.946 18:13:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.203 [2024-12-06 18:13:05.183227] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:15:53.203 [2024-12-06 18:13:05.183374] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81680 ] 00:15:53.203 [2024-12-06 18:13:05.364809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.460 [2024-12-06 18:13:05.495119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.716 [2024-12-06 18:13:05.732836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.717 [2024-12-06 18:13:05.732915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:53.973 18:13:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.973 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.232 malloc1 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.232 [2024-12-06 18:13:06.145449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.232 [2024-12-06 18:13:06.145544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.232 [2024-12-06 18:13:06.145574] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:54.232 [2024-12-06 18:13:06.145585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.232 [2024-12-06 18:13:06.148160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.232 [2024-12-06 18:13:06.148210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.232 pt1 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.232 malloc2 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.232 [2024-12-06 18:13:06.206435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.232 [2024-12-06 18:13:06.206615] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.232 [2024-12-06 18:13:06.206700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:54.232 [2024-12-06 18:13:06.206752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.232 [2024-12-06 18:13:06.209422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.232 [2024-12-06 18:13:06.209521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.232 pt2 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.232 malloc3 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.232 [2024-12-06 18:13:06.284414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:54.232 [2024-12-06 18:13:06.284569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.232 [2024-12-06 18:13:06.284643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:54.232 [2024-12-06 18:13:06.284694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.232 [2024-12-06 18:13:06.287594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.232 [2024-12-06 18:13:06.287747] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.232 pt3 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:54.232 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.233 [2024-12-06 18:13:06.296713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:54.233 [2024-12-06 
18:13:06.298976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.233 [2024-12-06 18:13:06.299198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:54.233 [2024-12-06 18:13:06.299457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:54.233 [2024-12-06 18:13:06.299487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:54.233 [2024-12-06 18:13:06.299860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:54.233 [2024-12-06 18:13:06.306763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:54.233 [2024-12-06 18:13:06.306806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:54.233 [2024-12-06 18:13:06.307238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.233 "name": "raid_bdev1", 00:15:54.233 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:54.233 "strip_size_kb": 64, 00:15:54.233 "state": "online", 00:15:54.233 "raid_level": "raid5f", 00:15:54.233 "superblock": true, 00:15:54.233 "num_base_bdevs": 3, 00:15:54.233 "num_base_bdevs_discovered": 3, 00:15:54.233 "num_base_bdevs_operational": 3, 00:15:54.233 "base_bdevs_list": [ 00:15:54.233 { 00:15:54.233 "name": "pt1", 00:15:54.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.233 "is_configured": true, 00:15:54.233 "data_offset": 2048, 00:15:54.233 "data_size": 63488 00:15:54.233 }, 00:15:54.233 { 00:15:54.233 "name": "pt2", 00:15:54.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.233 "is_configured": true, 00:15:54.233 "data_offset": 2048, 00:15:54.233 "data_size": 63488 00:15:54.233 }, 00:15:54.233 { 00:15:54.233 "name": "pt3", 00:15:54.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.233 "is_configured": true, 00:15:54.233 "data_offset": 2048, 00:15:54.233 "data_size": 63488 00:15:54.233 } 00:15:54.233 ] 00:15:54.233 }' 00:15:54.233 18:13:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.233 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.800 [2024-12-06 18:13:06.742216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.800 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.800 "name": "raid_bdev1", 00:15:54.800 "aliases": [ 00:15:54.800 "fb561512-f663-49ec-b586-a48e68ddf30d" 00:15:54.800 ], 00:15:54.800 "product_name": "Raid Volume", 00:15:54.800 "block_size": 512, 00:15:54.800 "num_blocks": 126976, 00:15:54.800 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:54.800 "assigned_rate_limits": { 00:15:54.800 "rw_ios_per_sec": 0, 00:15:54.800 
"rw_mbytes_per_sec": 0, 00:15:54.800 "r_mbytes_per_sec": 0, 00:15:54.800 "w_mbytes_per_sec": 0 00:15:54.800 }, 00:15:54.800 "claimed": false, 00:15:54.800 "zoned": false, 00:15:54.800 "supported_io_types": { 00:15:54.800 "read": true, 00:15:54.800 "write": true, 00:15:54.800 "unmap": false, 00:15:54.800 "flush": false, 00:15:54.800 "reset": true, 00:15:54.800 "nvme_admin": false, 00:15:54.800 "nvme_io": false, 00:15:54.800 "nvme_io_md": false, 00:15:54.800 "write_zeroes": true, 00:15:54.800 "zcopy": false, 00:15:54.800 "get_zone_info": false, 00:15:54.800 "zone_management": false, 00:15:54.800 "zone_append": false, 00:15:54.800 "compare": false, 00:15:54.800 "compare_and_write": false, 00:15:54.800 "abort": false, 00:15:54.800 "seek_hole": false, 00:15:54.800 "seek_data": false, 00:15:54.800 "copy": false, 00:15:54.800 "nvme_iov_md": false 00:15:54.800 }, 00:15:54.800 "driver_specific": { 00:15:54.800 "raid": { 00:15:54.800 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:54.800 "strip_size_kb": 64, 00:15:54.801 "state": "online", 00:15:54.801 "raid_level": "raid5f", 00:15:54.801 "superblock": true, 00:15:54.801 "num_base_bdevs": 3, 00:15:54.801 "num_base_bdevs_discovered": 3, 00:15:54.801 "num_base_bdevs_operational": 3, 00:15:54.801 "base_bdevs_list": [ 00:15:54.801 { 00:15:54.801 "name": "pt1", 00:15:54.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.801 "is_configured": true, 00:15:54.801 "data_offset": 2048, 00:15:54.801 "data_size": 63488 00:15:54.801 }, 00:15:54.801 { 00:15:54.801 "name": "pt2", 00:15:54.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.801 "is_configured": true, 00:15:54.801 "data_offset": 2048, 00:15:54.801 "data_size": 63488 00:15:54.801 }, 00:15:54.801 { 00:15:54.801 "name": "pt3", 00:15:54.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:54.801 "is_configured": true, 00:15:54.801 "data_offset": 2048, 00:15:54.801 "data_size": 63488 00:15:54.801 } 00:15:54.801 ] 00:15:54.801 } 00:15:54.801 } 
00:15:54.801 }' 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:54.801 pt2 00:15:54.801 pt3' 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:54.801 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.060 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.060 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.060 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.060 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:55.060 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.060 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 18:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.060 18:13:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:55.060 [2024-12-06 18:13:07.041642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fb561512-f663-49ec-b586-a48e68ddf30d 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fb561512-f663-49ec-b586-a48e68ddf30d ']' 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 [2024-12-06 18:13:07.089328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.060 [2024-12-06 18:13:07.089451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.060 [2024-12-06 18:13:07.089567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.060 [2024-12-06 18:13:07.089660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.060 [2024-12-06 18:13:07.089672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.318 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.319 [2024-12-06 18:13:07.245155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:55.319 [2024-12-06 18:13:07.247424] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:55.319 [2024-12-06 18:13:07.247579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:55.319 [2024-12-06 18:13:07.247718] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:55.319 [2024-12-06 18:13:07.247890] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:55.319 [2024-12-06 18:13:07.247970] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:55.319 [2024-12-06 18:13:07.248028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.319 [2024-12-06 18:13:07.248043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:55.319 request: 00:15:55.319 { 00:15:55.319 "name": "raid_bdev1", 00:15:55.319 "raid_level": "raid5f", 00:15:55.319 "base_bdevs": [ 00:15:55.319 "malloc1", 00:15:55.319 "malloc2", 00:15:55.319 "malloc3" 00:15:55.319 ], 00:15:55.319 "strip_size_kb": 64, 00:15:55.319 "superblock": false, 00:15:55.319 "method": "bdev_raid_create", 00:15:55.319 "req_id": 1 00:15:55.319 } 00:15:55.319 Got JSON-RPC error response 00:15:55.319 response: 00:15:55.319 { 00:15:55.319 "code": -17, 00:15:55.319 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:55.319 } 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:55.319 18:13:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.319 [2024-12-06 18:13:07.312970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.319 [2024-12-06 18:13:07.313058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.319 [2024-12-06 18:13:07.313095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:55.319 [2024-12-06 18:13:07.313107] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.319 [2024-12-06 18:13:07.315755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.319 [2024-12-06 18:13:07.315810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.319 [2024-12-06 18:13:07.315927] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt1 00:15:55.319 [2024-12-06 18:13:07.316002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.319 pt1 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.319 
18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.319 "name": "raid_bdev1", 00:15:55.319 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:55.319 "strip_size_kb": 64, 00:15:55.319 "state": "configuring", 00:15:55.319 "raid_level": "raid5f", 00:15:55.319 "superblock": true, 00:15:55.319 "num_base_bdevs": 3, 00:15:55.319 "num_base_bdevs_discovered": 1, 00:15:55.319 "num_base_bdevs_operational": 3, 00:15:55.319 "base_bdevs_list": [ 00:15:55.319 { 00:15:55.319 "name": "pt1", 00:15:55.319 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.319 "is_configured": true, 00:15:55.319 "data_offset": 2048, 00:15:55.319 "data_size": 63488 00:15:55.319 }, 00:15:55.319 { 00:15:55.319 "name": null, 00:15:55.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.319 "is_configured": false, 00:15:55.319 "data_offset": 2048, 00:15:55.319 "data_size": 63488 00:15:55.319 }, 00:15:55.319 { 00:15:55.319 "name": null, 00:15:55.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.319 "is_configured": false, 00:15:55.319 "data_offset": 2048, 00:15:55.319 "data_size": 63488 00:15:55.319 } 00:15:55.319 ] 00:15:55.319 }' 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.319 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.889 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:55.889 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.889 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.889 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.889 [2024-12-06 18:13:07.820158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.889 
[2024-12-06 18:13:07.820254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.889 [2024-12-06 18:13:07.820283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:55.889 [2024-12-06 18:13:07.820294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.889 [2024-12-06 18:13:07.820826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.889 [2024-12-06 18:13:07.820856] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.889 [2024-12-06 18:13:07.820966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:55.890 [2024-12-06 18:13:07.820999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.890 pt2 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.890 [2024-12-06 18:13:07.832193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.890 
18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.890 "name": "raid_bdev1", 00:15:55.890 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:55.890 "strip_size_kb": 64, 00:15:55.890 "state": "configuring", 00:15:55.890 "raid_level": "raid5f", 00:15:55.890 "superblock": true, 00:15:55.890 "num_base_bdevs": 3, 00:15:55.890 "num_base_bdevs_discovered": 1, 00:15:55.890 "num_base_bdevs_operational": 3, 00:15:55.890 "base_bdevs_list": [ 00:15:55.890 { 00:15:55.890 "name": "pt1", 00:15:55.890 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.890 "is_configured": true, 00:15:55.890 "data_offset": 2048, 00:15:55.890 "data_size": 63488 00:15:55.890 }, 00:15:55.890 { 00:15:55.890 "name": null, 00:15:55.890 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:55.890 "is_configured": false, 00:15:55.890 "data_offset": 0, 00:15:55.890 "data_size": 63488 00:15:55.890 }, 00:15:55.890 { 00:15:55.890 "name": null, 00:15:55.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.890 "is_configured": false, 00:15:55.890 "data_offset": 2048, 00:15:55.890 "data_size": 63488 00:15:55.890 } 00:15:55.890 ] 00:15:55.890 }' 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.890 18:13:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.150 [2024-12-06 18:13:08.283829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.150 [2024-12-06 18:13:08.284007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.150 [2024-12-06 18:13:08.284035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:56.150 [2024-12-06 18:13:08.284049] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.150 [2024-12-06 18:13:08.284632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.150 [2024-12-06 18:13:08.284659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.150 [2024-12-06 18:13:08.284760] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt2 00:15:56.150 [2024-12-06 18:13:08.284790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.150 pt2 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.150 [2024-12-06 18:13:08.295861] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:56.150 [2024-12-06 18:13:08.295953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.150 [2024-12-06 18:13:08.295974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:56.150 [2024-12-06 18:13:08.295986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.150 [2024-12-06 18:13:08.296543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.150 [2024-12-06 18:13:08.296572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:56.150 [2024-12-06 18:13:08.296672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:56.150 [2024-12-06 18:13:08.296702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:56.150 [2024-12-06 18:13:08.296887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:56.150 [2024-12-06 
18:13:08.296903] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:56.150 [2024-12-06 18:13:08.297204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:56.150 [2024-12-06 18:13:08.303768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:56.150 [2024-12-06 18:13:08.303806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:56.150 [2024-12-06 18:13:08.304106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.150 pt3 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.150 
18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.150 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.410 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.410 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.410 "name": "raid_bdev1", 00:15:56.410 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:56.410 "strip_size_kb": 64, 00:15:56.410 "state": "online", 00:15:56.410 "raid_level": "raid5f", 00:15:56.410 "superblock": true, 00:15:56.410 "num_base_bdevs": 3, 00:15:56.410 "num_base_bdevs_discovered": 3, 00:15:56.410 "num_base_bdevs_operational": 3, 00:15:56.410 "base_bdevs_list": [ 00:15:56.410 { 00:15:56.410 "name": "pt1", 00:15:56.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.410 "is_configured": true, 00:15:56.410 "data_offset": 2048, 00:15:56.410 "data_size": 63488 00:15:56.410 }, 00:15:56.410 { 00:15:56.410 "name": "pt2", 00:15:56.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.410 "is_configured": true, 00:15:56.410 "data_offset": 2048, 00:15:56.410 "data_size": 63488 00:15:56.410 }, 00:15:56.410 { 00:15:56.410 "name": "pt3", 00:15:56.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.410 "is_configured": true, 00:15:56.410 "data_offset": 2048, 00:15:56.410 "data_size": 63488 00:15:56.410 } 00:15:56.410 ] 00:15:56.410 }' 00:15:56.410 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.410 18:13:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.670 [2024-12-06 18:13:08.795115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.670 "name": "raid_bdev1", 00:15:56.670 "aliases": [ 00:15:56.670 "fb561512-f663-49ec-b586-a48e68ddf30d" 00:15:56.670 ], 00:15:56.670 "product_name": "Raid Volume", 00:15:56.670 "block_size": 512, 00:15:56.670 "num_blocks": 126976, 00:15:56.670 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:56.670 "assigned_rate_limits": { 00:15:56.670 "rw_ios_per_sec": 0, 00:15:56.670 "rw_mbytes_per_sec": 0, 00:15:56.670 "r_mbytes_per_sec": 0, 00:15:56.670 "w_mbytes_per_sec": 0 00:15:56.670 }, 00:15:56.670 "claimed": false, 
00:15:56.670 "zoned": false, 00:15:56.670 "supported_io_types": { 00:15:56.670 "read": true, 00:15:56.670 "write": true, 00:15:56.670 "unmap": false, 00:15:56.670 "flush": false, 00:15:56.670 "reset": true, 00:15:56.670 "nvme_admin": false, 00:15:56.670 "nvme_io": false, 00:15:56.670 "nvme_io_md": false, 00:15:56.670 "write_zeroes": true, 00:15:56.670 "zcopy": false, 00:15:56.670 "get_zone_info": false, 00:15:56.670 "zone_management": false, 00:15:56.670 "zone_append": false, 00:15:56.670 "compare": false, 00:15:56.670 "compare_and_write": false, 00:15:56.670 "abort": false, 00:15:56.670 "seek_hole": false, 00:15:56.670 "seek_data": false, 00:15:56.670 "copy": false, 00:15:56.670 "nvme_iov_md": false 00:15:56.670 }, 00:15:56.670 "driver_specific": { 00:15:56.670 "raid": { 00:15:56.670 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:56.670 "strip_size_kb": 64, 00:15:56.670 "state": "online", 00:15:56.670 "raid_level": "raid5f", 00:15:56.670 "superblock": true, 00:15:56.670 "num_base_bdevs": 3, 00:15:56.670 "num_base_bdevs_discovered": 3, 00:15:56.670 "num_base_bdevs_operational": 3, 00:15:56.670 "base_bdevs_list": [ 00:15:56.670 { 00:15:56.670 "name": "pt1", 00:15:56.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.670 "is_configured": true, 00:15:56.670 "data_offset": 2048, 00:15:56.670 "data_size": 63488 00:15:56.670 }, 00:15:56.670 { 00:15:56.670 "name": "pt2", 00:15:56.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.670 "is_configured": true, 00:15:56.670 "data_offset": 2048, 00:15:56.670 "data_size": 63488 00:15:56.670 }, 00:15:56.670 { 00:15:56.670 "name": "pt3", 00:15:56.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.670 "is_configured": true, 00:15:56.670 "data_offset": 2048, 00:15:56.670 "data_size": 63488 00:15:56.670 } 00:15:56.670 ] 00:15:56.670 } 00:15:56.670 } 00:15:56.670 }' 00:15:56.670 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:56.930 pt2 00:15:56.930 pt3' 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.930 18:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.930 18:13:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.930 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.930 [2024-12-06 18:13:09.082625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
fb561512-f663-49ec-b586-a48e68ddf30d '!=' fb561512-f663-49ec-b586-a48e68ddf30d ']' 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.190 [2024-12-06 18:13:09.126371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.190 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.190 "name": "raid_bdev1", 00:15:57.190 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:57.190 "strip_size_kb": 64, 00:15:57.190 "state": "online", 00:15:57.190 "raid_level": "raid5f", 00:15:57.190 "superblock": true, 00:15:57.190 "num_base_bdevs": 3, 00:15:57.190 "num_base_bdevs_discovered": 2, 00:15:57.190 "num_base_bdevs_operational": 2, 00:15:57.190 "base_bdevs_list": [ 00:15:57.190 { 00:15:57.191 "name": null, 00:15:57.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.191 "is_configured": false, 00:15:57.191 "data_offset": 0, 00:15:57.191 "data_size": 63488 00:15:57.191 }, 00:15:57.191 { 00:15:57.191 "name": "pt2", 00:15:57.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.191 "is_configured": true, 00:15:57.191 "data_offset": 2048, 00:15:57.191 "data_size": 63488 00:15:57.191 }, 00:15:57.191 { 00:15:57.191 "name": "pt3", 00:15:57.191 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.191 "is_configured": true, 00:15:57.191 "data_offset": 2048, 00:15:57.191 "data_size": 63488 00:15:57.191 } 00:15:57.191 ] 00:15:57.191 }' 00:15:57.191 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.191 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.453 
18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.453 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.453 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.453 [2024-12-06 18:13:09.577545] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.453 [2024-12-06 18:13:09.577662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.453 [2024-12-06 18:13:09.577796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.453 [2024-12-06 18:13:09.577869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.453 [2024-12-06 18:13:09.577888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:57.453 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.453 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.453 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:57.453 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.453 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.453 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 [2024-12-06 18:13:09.665387] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:15:57.713 [2024-12-06 18:13:09.665556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.713 [2024-12-06 18:13:09.665618] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:57.713 [2024-12-06 18:13:09.665662] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.713 [2024-12-06 18:13:09.668362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.713 [2024-12-06 18:13:09.668497] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.713 [2024-12-06 18:13:09.668672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:57.713 [2024-12-06 18:13:09.668794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.713 pt2 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.713 "name": "raid_bdev1", 00:15:57.713 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:57.713 "strip_size_kb": 64, 00:15:57.713 "state": "configuring", 00:15:57.713 "raid_level": "raid5f", 00:15:57.713 "superblock": true, 00:15:57.713 "num_base_bdevs": 3, 00:15:57.713 "num_base_bdevs_discovered": 1, 00:15:57.713 "num_base_bdevs_operational": 2, 00:15:57.713 "base_bdevs_list": [ 00:15:57.713 { 00:15:57.713 "name": null, 00:15:57.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.713 "is_configured": false, 00:15:57.713 "data_offset": 2048, 00:15:57.713 "data_size": 63488 00:15:57.713 }, 00:15:57.713 { 00:15:57.713 "name": "pt2", 00:15:57.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.713 "is_configured": true, 00:15:57.713 "data_offset": 2048, 00:15:57.713 "data_size": 63488 00:15:57.713 }, 00:15:57.713 { 00:15:57.713 "name": null, 00:15:57.713 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.713 "is_configured": false, 00:15:57.713 "data_offset": 2048, 00:15:57.713 "data_size": 63488 00:15:57.713 } 00:15:57.713 ] 00:15:57.713 }' 00:15:57.713 18:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.713 18:13:09 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.283 [2024-12-06 18:13:10.156595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:58.283 [2024-12-06 18:13:10.156697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.283 [2024-12-06 18:13:10.156724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:58.283 [2024-12-06 18:13:10.156737] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.283 [2024-12-06 18:13:10.157355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.283 [2024-12-06 18:13:10.157381] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:58.283 [2024-12-06 18:13:10.157481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:58.283 [2024-12-06 18:13:10.157513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:58.283 [2024-12-06 18:13:10.157650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:58.283 [2024-12-06 18:13:10.157663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:58.283 [2024-12-06 
18:13:10.157961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:58.283 [2024-12-06 18:13:10.164777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:58.283 pt3 00:15:58.283 [2024-12-06 18:13:10.164933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:58.283 [2024-12-06 18:13:10.165411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.283 18:13:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.283 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.283 "name": "raid_bdev1", 00:15:58.283 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:58.283 "strip_size_kb": 64, 00:15:58.283 "state": "online", 00:15:58.283 "raid_level": "raid5f", 00:15:58.283 "superblock": true, 00:15:58.283 "num_base_bdevs": 3, 00:15:58.283 "num_base_bdevs_discovered": 2, 00:15:58.283 "num_base_bdevs_operational": 2, 00:15:58.283 "base_bdevs_list": [ 00:15:58.283 { 00:15:58.283 "name": null, 00:15:58.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.284 "is_configured": false, 00:15:58.284 "data_offset": 2048, 00:15:58.284 "data_size": 63488 00:15:58.284 }, 00:15:58.284 { 00:15:58.284 "name": "pt2", 00:15:58.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.284 "is_configured": true, 00:15:58.284 "data_offset": 2048, 00:15:58.284 "data_size": 63488 00:15:58.284 }, 00:15:58.284 { 00:15:58.284 "name": "pt3", 00:15:58.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.284 "is_configured": true, 00:15:58.284 "data_offset": 2048, 00:15:58.284 "data_size": 63488 00:15:58.284 } 00:15:58.284 ] 00:15:58.284 }' 00:15:58.284 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.284 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.544 [2024-12-06 18:13:10.613559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.544 [2024-12-06 18:13:10.613705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.544 [2024-12-06 18:13:10.613824] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.544 [2024-12-06 18:13:10.613909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.544 [2024-12-06 18:13:10.613922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.544 18:13:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.544 [2024-12-06 18:13:10.685492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.544 [2024-12-06 18:13:10.685584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.544 [2024-12-06 18:13:10.685609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:58.544 [2024-12-06 18:13:10.685620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.544 [2024-12-06 18:13:10.688540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.544 [2024-12-06 18:13:10.688606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.544 [2024-12-06 18:13:10.688736] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:58.544 [2024-12-06 18:13:10.688815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.544 [2024-12-06 18:13:10.689029] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:58.544 [2024-12-06 18:13:10.689047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.544 [2024-12-06 18:13:10.689098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:58.544 
[2024-12-06 18:13:10.689167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.544 pt1 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.544 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.804 18:13:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.804 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.804 "name": "raid_bdev1", 00:15:58.804 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:58.804 "strip_size_kb": 64, 00:15:58.804 "state": "configuring", 00:15:58.804 "raid_level": "raid5f", 00:15:58.804 "superblock": true, 00:15:58.804 "num_base_bdevs": 3, 00:15:58.804 "num_base_bdevs_discovered": 1, 00:15:58.804 "num_base_bdevs_operational": 2, 00:15:58.804 "base_bdevs_list": [ 00:15:58.804 { 00:15:58.804 "name": null, 00:15:58.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.804 "is_configured": false, 00:15:58.804 "data_offset": 2048, 00:15:58.804 "data_size": 63488 00:15:58.804 }, 00:15:58.804 { 00:15:58.804 "name": "pt2", 00:15:58.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.804 "is_configured": true, 00:15:58.804 "data_offset": 2048, 00:15:58.804 "data_size": 63488 00:15:58.804 }, 00:15:58.804 { 00:15:58.804 "name": null, 00:15:58.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.804 "is_configured": false, 00:15:58.804 "data_offset": 2048, 00:15:58.804 "data_size": 63488 00:15:58.804 } 00:15:58.804 ] 00:15:58.804 }' 00:15:58.804 18:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.804 18:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.064 [2024-12-06 18:13:11.172719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:59.064 [2024-12-06 18:13:11.172815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.064 [2024-12-06 18:13:11.172842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:59.064 [2024-12-06 18:13:11.172854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.064 [2024-12-06 18:13:11.173509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.064 [2024-12-06 18:13:11.173550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:59.064 [2024-12-06 18:13:11.173660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:59.064 [2024-12-06 18:13:11.173690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:59.064 [2024-12-06 18:13:11.173853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:59.064 [2024-12-06 18:13:11.173865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:59.064 [2024-12-06 18:13:11.174214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:59.064 [2024-12-06 18:13:11.181839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:59.064 [2024-12-06 
18:13:11.181886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:59.064 [2024-12-06 18:13:11.182290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.064 pt3 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.064 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.064 18:13:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.323 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.323 "name": "raid_bdev1", 00:15:59.323 "uuid": "fb561512-f663-49ec-b586-a48e68ddf30d", 00:15:59.323 "strip_size_kb": 64, 00:15:59.323 "state": "online", 00:15:59.323 "raid_level": "raid5f", 00:15:59.323 "superblock": true, 00:15:59.323 "num_base_bdevs": 3, 00:15:59.323 "num_base_bdevs_discovered": 2, 00:15:59.324 "num_base_bdevs_operational": 2, 00:15:59.324 "base_bdevs_list": [ 00:15:59.324 { 00:15:59.324 "name": null, 00:15:59.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.324 "is_configured": false, 00:15:59.324 "data_offset": 2048, 00:15:59.324 "data_size": 63488 00:15:59.324 }, 00:15:59.324 { 00:15:59.324 "name": "pt2", 00:15:59.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.324 "is_configured": true, 00:15:59.324 "data_offset": 2048, 00:15:59.324 "data_size": 63488 00:15:59.324 }, 00:15:59.324 { 00:15:59.324 "name": "pt3", 00:15:59.324 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.324 "is_configured": true, 00:15:59.324 "data_offset": 2048, 00:15:59.324 "data_size": 63488 00:15:59.324 } 00:15:59.324 ] 00:15:59.324 }' 00:15:59.324 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.324 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.584 [2024-12-06 18:13:11.710813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fb561512-f663-49ec-b586-a48e68ddf30d '!=' fb561512-f663-49ec-b586-a48e68ddf30d ']' 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81680 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81680 ']' 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81680 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.584 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81680 00:15:59.845 killing process with pid 81680 00:15:59.845 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.845 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.845 18:13:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81680'
00:15:59.845 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81680
00:15:59.845 [2024-12-06 18:13:11.768309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:59.845 18:13:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81680
00:15:59.845 [2024-12-06 18:13:11.768431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:59.845 [2024-12-06 18:13:11.768512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:59.845 [2024-12-06 18:13:11.768534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:16:00.104 [2024-12-06 18:13:12.123365] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:01.483 ************************************
00:16:01.483 END TEST raid5f_superblock_test
00:16:01.483 ************************************
00:16:01.483 18:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:16:01.483
00:16:01.483 real	0m8.340s
00:16:01.483 user	0m12.929s
00:16:01.483 sys	0m1.456s
00:16:01.483 18:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:01.483 18:13:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.483 18:13:13 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']'
00:16:01.483 18:13:13 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true
00:16:01.483 18:13:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:16:01.483 18:13:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:01.483 18:13:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:01.483 ************************************
00:16:01.483 START TEST raid5f_rebuild_test
00:16:01.483 ************************************
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:16:01.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82132
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82132
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82132 ']'
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:01.483 18:13:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:16:01.483 [2024-12-06 18:13:13.580854] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization...
00:16:01.483 [2024-12-06 18:13:13.581094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82132 ]
00:16:01.483 I/O size of 3145728 is greater than zero copy threshold (65536).
00:16:01.483 Zero copy mechanism will not be used.
00:16:01.744 [2024-12-06 18:13:13.758096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:01.744 [2024-12-06 18:13:13.880467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:02.003 [2024-12-06 18:13:14.095045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:02.003 [2024-12-06 18:13:14.095202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:02.262 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:02.262 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0
00:16:02.262 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:02.262 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:02.262 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.262 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.522 BaseBdev1_malloc
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.522 [2024-12-06 18:13:14.467680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:16:02.522 [2024-12-06 18:13:14.467758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:02.522 [2024-12-06 18:13:14.467785] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:02.522 [2024-12-06 18:13:14.467798] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:02.522 [2024-12-06 18:13:14.470262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:02.522 [2024-12-06 18:13:14.470312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:16:02.522 BaseBdev1
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.522 BaseBdev2_malloc
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.522 [2024-12-06 18:13:14.530485] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:16:02.522 [2024-12-06 18:13:14.530564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:02.522 [2024-12-06 18:13:14.530593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:02.522 [2024-12-06 18:13:14.530606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:02.522 [2024-12-06 18:13:14.533144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:02.522 [2024-12-06 18:13:14.533179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:16:02.522 BaseBdev2
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.522 BaseBdev3_malloc
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.522 [2024-12-06 18:13:14.601183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:16:02.522 [2024-12-06 18:13:14.601307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:02.522 [2024-12-06 18:13:14.601354] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:02.522 [2024-12-06 18:13:14.601394] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:02.522 [2024-12-06 18:13:14.603777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:02.522 [2024-12-06 18:13:14.603862] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:16:02.522 BaseBdev3
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.522 spare_malloc
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.522 spare_delay
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.522 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:02.523 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.523 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.523 [2024-12-06 18:13:14.676353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:02.523 [2024-12-06 18:13:14.676471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:02.523 [2024-12-06 18:13:14.676515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:16:02.523 [2024-12-06 18:13:14.676574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:02.523 [2024-12-06 18:13:14.679122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:02.523 [2024-12-06 18:13:14.679207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:16:02.523 spare
00:16:02.523 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.523 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:16:02.523 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.523 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.883 [2024-12-06 18:13:14.688430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:02.883 [2024-12-06 18:13:14.690657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:02.883 [2024-12-06 18:13:14.690802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:02.883 [2024-12-06 18:13:14.690922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:16:02.883 [2024-12-06 18:13:14.690937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:16:02.883 [2024-12-06 18:13:14.691293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:02.883 [2024-12-06 18:13:14.698364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:16:02.883 [2024-12-06 18:13:14.698440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:16:02.883 [2024-12-06 18:13:14.698801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:02.883 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:02.884 "name": "raid_bdev1",
00:16:02.884 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed",
00:16:02.884 "strip_size_kb": 64,
00:16:02.884 "state": "online",
00:16:02.884 "raid_level": "raid5f",
00:16:02.884 "superblock": false,
00:16:02.884 "num_base_bdevs": 3,
00:16:02.884 "num_base_bdevs_discovered": 3,
00:16:02.884 "num_base_bdevs_operational": 3,
00:16:02.884 "base_bdevs_list": [
00:16:02.884 {
00:16:02.884 "name": "BaseBdev1",
00:16:02.884 "uuid": "ecf8392b-ecb4-535d-9898-b851fea9aeee",
00:16:02.884 "is_configured": true,
00:16:02.884 "data_offset": 0,
00:16:02.884 "data_size": 65536
00:16:02.884 },
00:16:02.884 {
00:16:02.884 "name": "BaseBdev2",
00:16:02.884 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2",
00:16:02.884 "is_configured": true,
00:16:02.884 "data_offset": 0,
00:16:02.884 "data_size": 65536
00:16:02.884 },
00:16:02.884 {
00:16:02.884 "name": "BaseBdev3",
00:16:02.884 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494",
00:16:02.884 "is_configured": true,
00:16:02.884 "data_offset": 0,
00:16:02.884 "data_size": 65536
00:16:02.884 }
00:16:02.884 ]
00:16:02.884 }'
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:02.884 18:13:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:16:03.151 [2024-12-06 18:13:15.197719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:03.151 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:16:03.410 [2024-12-06 18:13:15.493086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:16:03.410 /dev/nbd0
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:03.410 1+0 records in
00:16:03.410 1+0 records out
00:16:03.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300822 s, 13.6 MB/s
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128
00:16:03.410 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
00:16:03.978 512+0 records in
00:16:03.978 512+0 records out
00:16:03.978 67108864 bytes (67 MB, 64 MiB) copied, 0.428429 s, 157 MB/s
00:16:03.978 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:16:03.978 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:03.978 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:03.978 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:03.978 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:16:03.978 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:03.978 18:13:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:04.237 [2024-12-06 18:13:16.276611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.237 [2024-12-06 18:13:16.289806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:04.237 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:04.238 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:04.238 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.238 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.238 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:04.238 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.238 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:04.238 "name": "raid_bdev1",
00:16:04.238 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed",
00:16:04.238 "strip_size_kb": 64,
00:16:04.238 "state": "online",
00:16:04.238 "raid_level": "raid5f",
00:16:04.238 "superblock": false,
00:16:04.238 "num_base_bdevs": 3,
00:16:04.238 "num_base_bdevs_discovered": 2,
00:16:04.238 "num_base_bdevs_operational": 2,
00:16:04.238 "base_bdevs_list": [
00:16:04.238 {
00:16:04.238 "name": null,
00:16:04.238 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:04.238 "is_configured": false,
00:16:04.238 "data_offset": 0,
00:16:04.238 "data_size": 65536
00:16:04.238 },
00:16:04.238 {
00:16:04.238 "name": "BaseBdev2",
00:16:04.238 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2",
00:16:04.238 "is_configured": true,
00:16:04.238 "data_offset": 0,
00:16:04.238 "data_size": 65536
00:16:04.238 },
00:16:04.238 {
00:16:04.238 "name": "BaseBdev3",
00:16:04.238 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494",
00:16:04.238 "is_configured": true,
00:16:04.238 "data_offset": 0,
00:16:04.238 "data_size": 65536
00:16:04.238 }
00:16:04.238 ]
00:16:04.238 }'
00:16:04.238 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:04.238 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.803 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:04.803 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.803 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:04.803 [2024-12-06 18:13:16.749217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:04.803 [2024-12-06 18:13:16.770994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680
00:16:04.803 18:13:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.803 18:13:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:16:04.803 [2024-12-06 18:13:16.781519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.738 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:05.738 "name": "raid_bdev1",
00:16:05.738 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed",
00:16:05.738 "strip_size_kb": 64,
00:16:05.738 "state": "online",
00:16:05.738 "raid_level": "raid5f",
00:16:05.738 "superblock": false,
00:16:05.738 "num_base_bdevs": 3,
00:16:05.738 "num_base_bdevs_discovered": 3,
00:16:05.738 "num_base_bdevs_operational": 3,
00:16:05.738 "process": {
00:16:05.738 "type": "rebuild",
00:16:05.738 "target": "spare",
00:16:05.738 "progress": {
00:16:05.738 "blocks": 18432,
00:16:05.738 "percent": 14
00:16:05.738 }
00:16:05.738 },
00:16:05.738 "base_bdevs_list": [
00:16:05.738 {
00:16:05.739 "name": "spare",
00:16:05.739 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8",
00:16:05.739 "is_configured": true,
00:16:05.739 "data_offset": 0,
00:16:05.739 "data_size": 65536
00:16:05.739 },
00:16:05.739 {
00:16:05.739 "name": "BaseBdev2",
00:16:05.739 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2",
00:16:05.739 "is_configured": true,
00:16:05.739 "data_offset": 0,
00:16:05.739 "data_size": 65536
00:16:05.739 },
00:16:05.739 {
00:16:05.739 "name": "BaseBdev3",
00:16:05.739 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494",
00:16:05.739 "is_configured": true,
00:16:05.739 "data_offset": 0,
00:16:05.739 "data_size": 65536
00:16:05.739 }
00:16:05.739 ]
00:16:05.739 }'
00:16:05.739 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:05.739 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:05.739 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:05.997 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:05.997 18:13:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:05.997 18:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.997 18:13:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.997 [2024-12-06 18:13:17.921871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:05.997 [2024-12-06 18:13:17.995373] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:05.997 [2024-12-06 18:13:17.995582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:05.997 [2024-12-06 18:13:17.995657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:05.997 [2024-12-06 18:13:17.995699] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:05.997 "name": "raid_bdev1",
00:16:05.997 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed",
00:16:05.997 "strip_size_kb": 64,
00:16:05.997 "state": "online",
00:16:05.997 "raid_level": "raid5f",
00:16:05.997 "superblock": false,
00:16:05.997 "num_base_bdevs": 3,
00:16:05.997 "num_base_bdevs_discovered": 2,
00:16:05.997 "num_base_bdevs_operational": 2,
00:16:05.997 "base_bdevs_list": [
00:16:05.997 {
00:16:05.997 "name": null,
00:16:05.997 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:05.997 "is_configured": false,
00:16:05.997 "data_offset": 0,
00:16:05.997 "data_size": 65536
00:16:05.997 },
00:16:05.997 {
00:16:05.997 "name": "BaseBdev2",
00:16:05.997 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2",
00:16:05.997 "is_configured": true,
00:16:05.997 "data_offset": 0,
00:16:05.997 "data_size": 65536
00:16:05.997 },
00:16:05.997 {
00:16:05.997 "name": "BaseBdev3",
00:16:05.997 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494",
00:16:05.997 "is_configured": true,
00:16:05.997 "data_offset": 0,
00:16:05.997 "data_size": 65536
00:16:05.997 }
00:16:05.997 ]
00:16:05.997 }'
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:05.997 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.563 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:06.563 "name": "raid_bdev1",
00:16:06.563 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed",
00:16:06.563 "strip_size_kb": 64,
00:16:06.563 "state": "online",
00:16:06.563 "raid_level": "raid5f",
00:16:06.563 "superblock": false,
00:16:06.563 "num_base_bdevs": 3,
00:16:06.563 "num_base_bdevs_discovered": 2,
00:16:06.563 "num_base_bdevs_operational": 2,
00:16:06.563 "base_bdevs_list": [
00:16:06.563 {
00:16:06.563 "name": null,
00:16:06.563 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.563 "is_configured": false,
00:16:06.563 "data_offset": 0,
00:16:06.563 "data_size": 65536
00:16:06.563 },
00:16:06.563 {
00:16:06.563 "name": "BaseBdev2",
00:16:06.563 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2",
00:16:06.563 "is_configured": true,
00:16:06.563 "data_offset": 0,
00:16:06.564 "data_size": 65536
00:16:06.564 },
00:16:06.564 {
00:16:06.564 "name": "BaseBdev3",
00:16:06.564 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494",
00:16:06.564 "is_configured": true,
00:16:06.564 "data_offset": 0,
00:16:06.564 "data_size": 65536
00:16:06.564 }
00:16:06.564 ]
00:16:06.564 }'
00:16:06.564 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:06.564 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:06.564 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:06.564 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:06.564 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:06.564 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.564 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:06.564 [2024-12-06 18:13:18.633993] bdev_raid.c:3326:raid_bdev_configure_base_bdev:
*DEBUG*: bdev spare is claimed 00:16:06.564 [2024-12-06 18:13:18.654477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:06.564 18:13:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.564 18:13:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:06.564 [2024-12-06 18:13:18.664988] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:07.496 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.496 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.496 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.496 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.496 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.754 "name": "raid_bdev1", 00:16:07.754 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed", 00:16:07.754 "strip_size_kb": 64, 00:16:07.754 "state": "online", 00:16:07.754 "raid_level": "raid5f", 00:16:07.754 "superblock": false, 00:16:07.754 "num_base_bdevs": 3, 00:16:07.754 
"num_base_bdevs_discovered": 3, 00:16:07.754 "num_base_bdevs_operational": 3, 00:16:07.754 "process": { 00:16:07.754 "type": "rebuild", 00:16:07.754 "target": "spare", 00:16:07.754 "progress": { 00:16:07.754 "blocks": 18432, 00:16:07.754 "percent": 14 00:16:07.754 } 00:16:07.754 }, 00:16:07.754 "base_bdevs_list": [ 00:16:07.754 { 00:16:07.754 "name": "spare", 00:16:07.754 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8", 00:16:07.754 "is_configured": true, 00:16:07.754 "data_offset": 0, 00:16:07.754 "data_size": 65536 00:16:07.754 }, 00:16:07.754 { 00:16:07.754 "name": "BaseBdev2", 00:16:07.754 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2", 00:16:07.754 "is_configured": true, 00:16:07.754 "data_offset": 0, 00:16:07.754 "data_size": 65536 00:16:07.754 }, 00:16:07.754 { 00:16:07.754 "name": "BaseBdev3", 00:16:07.754 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494", 00:16:07.754 "is_configured": true, 00:16:07.754 "data_offset": 0, 00:16:07.754 "data_size": 65536 00:16:07.754 } 00:16:07.754 ] 00:16:07.754 }' 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=573 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.754 "name": "raid_bdev1", 00:16:07.754 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed", 00:16:07.754 "strip_size_kb": 64, 00:16:07.754 "state": "online", 00:16:07.754 "raid_level": "raid5f", 00:16:07.754 "superblock": false, 00:16:07.754 "num_base_bdevs": 3, 00:16:07.754 "num_base_bdevs_discovered": 3, 00:16:07.754 "num_base_bdevs_operational": 3, 00:16:07.754 "process": { 00:16:07.754 "type": "rebuild", 00:16:07.754 "target": "spare", 00:16:07.754 "progress": { 00:16:07.754 "blocks": 22528, 00:16:07.754 "percent": 17 00:16:07.754 } 00:16:07.754 }, 00:16:07.754 "base_bdevs_list": [ 00:16:07.754 { 00:16:07.754 "name": "spare", 00:16:07.754 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8", 00:16:07.754 "is_configured": true, 00:16:07.754 "data_offset": 0, 00:16:07.754 
"data_size": 65536 00:16:07.754 }, 00:16:07.754 { 00:16:07.754 "name": "BaseBdev2", 00:16:07.754 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2", 00:16:07.754 "is_configured": true, 00:16:07.754 "data_offset": 0, 00:16:07.754 "data_size": 65536 00:16:07.754 }, 00:16:07.754 { 00:16:07.754 "name": "BaseBdev3", 00:16:07.754 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494", 00:16:07.754 "is_configured": true, 00:16:07.754 "data_offset": 0, 00:16:07.754 "data_size": 65536 00:16:07.754 } 00:16:07.754 ] 00:16:07.754 }' 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.754 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.012 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.012 18:13:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.945 18:13:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.945 18:13:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.945 "name": "raid_bdev1", 00:16:08.945 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed", 00:16:08.945 "strip_size_kb": 64, 00:16:08.945 "state": "online", 00:16:08.945 "raid_level": "raid5f", 00:16:08.945 "superblock": false, 00:16:08.945 "num_base_bdevs": 3, 00:16:08.945 "num_base_bdevs_discovered": 3, 00:16:08.945 "num_base_bdevs_operational": 3, 00:16:08.945 "process": { 00:16:08.945 "type": "rebuild", 00:16:08.945 "target": "spare", 00:16:08.945 "progress": { 00:16:08.945 "blocks": 45056, 00:16:08.945 "percent": 34 00:16:08.945 } 00:16:08.945 }, 00:16:08.945 "base_bdevs_list": [ 00:16:08.945 { 00:16:08.945 "name": "spare", 00:16:08.945 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8", 00:16:08.945 "is_configured": true, 00:16:08.945 "data_offset": 0, 00:16:08.945 "data_size": 65536 00:16:08.945 }, 00:16:08.945 { 00:16:08.945 "name": "BaseBdev2", 00:16:08.945 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2", 00:16:08.945 "is_configured": true, 00:16:08.945 "data_offset": 0, 00:16:08.945 "data_size": 65536 00:16:08.945 }, 00:16:08.945 { 00:16:08.945 "name": "BaseBdev3", 00:16:08.945 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494", 00:16:08.945 "is_configured": true, 00:16:08.945 "data_offset": 0, 00:16:08.945 "data_size": 65536 00:16:08.945 } 00:16:08.945 ] 00:16:08.945 }' 00:16:08.945 18:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.945 18:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.945 18:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:08.945 18:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.945 18:13:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.324 "name": "raid_bdev1", 00:16:10.324 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed", 00:16:10.324 "strip_size_kb": 64, 00:16:10.324 "state": "online", 00:16:10.324 "raid_level": "raid5f", 00:16:10.324 "superblock": false, 00:16:10.324 "num_base_bdevs": 3, 00:16:10.324 "num_base_bdevs_discovered": 3, 00:16:10.324 "num_base_bdevs_operational": 3, 00:16:10.324 "process": { 00:16:10.324 "type": "rebuild", 00:16:10.324 "target": "spare", 00:16:10.324 
"progress": { 00:16:10.324 "blocks": 69632, 00:16:10.324 "percent": 53 00:16:10.324 } 00:16:10.324 }, 00:16:10.324 "base_bdevs_list": [ 00:16:10.324 { 00:16:10.324 "name": "spare", 00:16:10.324 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8", 00:16:10.324 "is_configured": true, 00:16:10.324 "data_offset": 0, 00:16:10.324 "data_size": 65536 00:16:10.324 }, 00:16:10.324 { 00:16:10.324 "name": "BaseBdev2", 00:16:10.324 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2", 00:16:10.324 "is_configured": true, 00:16:10.324 "data_offset": 0, 00:16:10.324 "data_size": 65536 00:16:10.324 }, 00:16:10.324 { 00:16:10.324 "name": "BaseBdev3", 00:16:10.324 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494", 00:16:10.324 "is_configured": true, 00:16:10.324 "data_offset": 0, 00:16:10.324 "data_size": 65536 00:16:10.324 } 00:16:10.324 ] 00:16:10.324 }' 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.324 18:13:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.263 "name": "raid_bdev1", 00:16:11.263 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed", 00:16:11.263 "strip_size_kb": 64, 00:16:11.263 "state": "online", 00:16:11.263 "raid_level": "raid5f", 00:16:11.263 "superblock": false, 00:16:11.263 "num_base_bdevs": 3, 00:16:11.263 "num_base_bdevs_discovered": 3, 00:16:11.263 "num_base_bdevs_operational": 3, 00:16:11.263 "process": { 00:16:11.263 "type": "rebuild", 00:16:11.263 "target": "spare", 00:16:11.263 "progress": { 00:16:11.263 "blocks": 92160, 00:16:11.263 "percent": 70 00:16:11.263 } 00:16:11.263 }, 00:16:11.263 "base_bdevs_list": [ 00:16:11.263 { 00:16:11.263 "name": "spare", 00:16:11.263 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8", 00:16:11.263 "is_configured": true, 00:16:11.263 "data_offset": 0, 00:16:11.263 "data_size": 65536 00:16:11.263 }, 00:16:11.263 { 00:16:11.263 "name": "BaseBdev2", 00:16:11.263 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2", 00:16:11.263 "is_configured": true, 00:16:11.263 "data_offset": 0, 00:16:11.263 "data_size": 65536 00:16:11.263 }, 00:16:11.263 { 00:16:11.263 "name": "BaseBdev3", 00:16:11.263 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494", 00:16:11.263 "is_configured": true, 00:16:11.263 "data_offset": 0, 00:16:11.263 "data_size": 65536 00:16:11.263 } 00:16:11.263 ] 00:16:11.263 }' 
00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.263 18:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.642 "name": "raid_bdev1", 00:16:12.642 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed", 00:16:12.642 "strip_size_kb": 64, 00:16:12.642 
"state": "online", 00:16:12.642 "raid_level": "raid5f", 00:16:12.642 "superblock": false, 00:16:12.642 "num_base_bdevs": 3, 00:16:12.642 "num_base_bdevs_discovered": 3, 00:16:12.642 "num_base_bdevs_operational": 3, 00:16:12.642 "process": { 00:16:12.642 "type": "rebuild", 00:16:12.642 "target": "spare", 00:16:12.642 "progress": { 00:16:12.642 "blocks": 114688, 00:16:12.642 "percent": 87 00:16:12.642 } 00:16:12.642 }, 00:16:12.642 "base_bdevs_list": [ 00:16:12.642 { 00:16:12.642 "name": "spare", 00:16:12.642 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8", 00:16:12.642 "is_configured": true, 00:16:12.642 "data_offset": 0, 00:16:12.642 "data_size": 65536 00:16:12.642 }, 00:16:12.642 { 00:16:12.642 "name": "BaseBdev2", 00:16:12.642 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2", 00:16:12.642 "is_configured": true, 00:16:12.642 "data_offset": 0, 00:16:12.642 "data_size": 65536 00:16:12.642 }, 00:16:12.642 { 00:16:12.642 "name": "BaseBdev3", 00:16:12.642 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494", 00:16:12.642 "is_configured": true, 00:16:12.642 "data_offset": 0, 00:16:12.642 "data_size": 65536 00:16:12.642 } 00:16:12.642 ] 00:16:12.642 }' 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.642 18:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.211 [2024-12-06 18:13:25.137042] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:13.211 [2024-12-06 18:13:25.137305] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:13.211 [2024-12-06 
18:13:25.137371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.470 "name": "raid_bdev1", 00:16:13.470 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed", 00:16:13.470 "strip_size_kb": 64, 00:16:13.470 "state": "online", 00:16:13.470 "raid_level": "raid5f", 00:16:13.470 "superblock": false, 00:16:13.470 "num_base_bdevs": 3, 00:16:13.470 "num_base_bdevs_discovered": 3, 00:16:13.470 "num_base_bdevs_operational": 3, 00:16:13.470 "base_bdevs_list": [ 00:16:13.470 { 00:16:13.470 "name": "spare", 00:16:13.470 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8", 00:16:13.470 "is_configured": true, 00:16:13.470 "data_offset": 0, 00:16:13.470 "data_size": 65536 
00:16:13.470 }, 00:16:13.470 { 00:16:13.470 "name": "BaseBdev2", 00:16:13.470 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2", 00:16:13.470 "is_configured": true, 00:16:13.470 "data_offset": 0, 00:16:13.470 "data_size": 65536 00:16:13.470 }, 00:16:13.470 { 00:16:13.470 "name": "BaseBdev3", 00:16:13.470 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494", 00:16:13.470 "is_configured": true, 00:16:13.470 "data_offset": 0, 00:16:13.470 "data_size": 65536 00:16:13.470 } 00:16:13.470 ] 00:16:13.470 }' 00:16:13.470 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.729 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.729 "name": "raid_bdev1", 00:16:13.729 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed", 00:16:13.729 "strip_size_kb": 64, 00:16:13.729 "state": "online", 00:16:13.729 "raid_level": "raid5f", 00:16:13.729 "superblock": false, 00:16:13.729 "num_base_bdevs": 3, 00:16:13.729 "num_base_bdevs_discovered": 3, 00:16:13.729 "num_base_bdevs_operational": 3, 00:16:13.729 "base_bdevs_list": [ 00:16:13.729 { 00:16:13.729 "name": "spare", 00:16:13.729 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8", 00:16:13.729 "is_configured": true, 00:16:13.729 "data_offset": 0, 00:16:13.729 "data_size": 65536 00:16:13.729 }, 00:16:13.729 { 00:16:13.729 "name": "BaseBdev2", 00:16:13.729 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2", 00:16:13.729 "is_configured": true, 00:16:13.729 "data_offset": 0, 00:16:13.729 "data_size": 65536 00:16:13.729 }, 00:16:13.729 { 00:16:13.729 "name": "BaseBdev3", 00:16:13.729 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494", 00:16:13.729 "is_configured": true, 00:16:13.729 "data_offset": 0, 00:16:13.729 "data_size": 65536 00:16:13.729 } 00:16:13.729 ] 00:16:13.729 }' 00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:13.730 "name": "raid_bdev1",
00:16:13.730 "uuid": "2862901f-4070-47cf-b0d2-6e557bb525ed",
00:16:13.730 "strip_size_kb": 64,
00:16:13.730 "state": "online",
00:16:13.730 "raid_level": "raid5f",
00:16:13.730 "superblock": false,
00:16:13.730 "num_base_bdevs": 3,
00:16:13.730 "num_base_bdevs_discovered": 3,
00:16:13.730 "num_base_bdevs_operational": 3,
00:16:13.730 "base_bdevs_list": [
00:16:13.730 {
00:16:13.730 "name": "spare",
00:16:13.730 "uuid": "621f7845-4319-51bf-a88b-eb54abddced8",
00:16:13.730 "is_configured": true,
00:16:13.730 "data_offset": 0,
00:16:13.730 "data_size": 65536
00:16:13.730 },
00:16:13.730 {
00:16:13.730 "name": "BaseBdev2",
00:16:13.730 "uuid": "f8185f88-5d3a-541f-8faf-7f1d60ecc8b2",
00:16:13.730 "is_configured": true,
00:16:13.730 "data_offset": 0,
00:16:13.730 "data_size": 65536
00:16:13.730 },
00:16:13.730 {
00:16:13.730 "name": "BaseBdev3",
00:16:13.730 "uuid": "6e05ccdb-24e8-5233-a6a8-4ae30ecdd494",
00:16:13.730 "is_configured": true,
00:16:13.730 "data_offset": 0,
00:16:13.730 "data_size": 65536
00:16:13.730 }
00:16:13.730 ]
00:16:13.730 }'
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:13.730 18:13:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.300 [2024-12-06 18:13:26.298111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:14.300 [2024-12-06 18:13:26.298244] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:14.300 [2024-12-06 18:13:26.298405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:14.300 [2024-12-06 18:13:26.298551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:14.300 [2024-12-06 18:13:26.298622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:16:14.300 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:16:14.559 /dev/nbd0
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:16:14.559 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:14.559 1+0 records in
00:16:14.560 1+0 records out
00:16:14.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573579 s, 7.1 MB/s
00:16:14.560 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:14.560 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:16:14.560 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:14.560 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:16:14.560 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:16:14.560 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:14.560 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:16:14.560 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:16:14.819 /dev/nbd1
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:14.819 1+0 records in
00:16:14.819 1+0 records out
00:16:14.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486493 s, 8.4 MB/s
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:16:14.819 18:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:16:15.079 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:16:15.079 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:15.079 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:16:15.079 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:15.079 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:16:15.079 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:15.079 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:15.339 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82132
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82132 ']'
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82132
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:15.600 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82132
killing process with pid 82132
Received shutdown signal, test time was about 60.000000 seconds
00:16:15.860
00:16:15.860 Latency(us)
00:16:15.860 [2024-12-06T18:13:28.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:15.860 [2024-12-06T18:13:28.028Z] ===================================================================================================================
00:16:15.860 [2024-12-06T18:13:28.028Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:16:15.860 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:15.860 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:15.860 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82132'
00:16:15.860 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82132
00:16:15.860 18:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82132
[2024-12-06 18:13:27.770167] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:16.121 [2024-12-06 18:13:28.241767] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:16:17.497
00:16:17.497 real 0m16.089s
00:16:17.497 user 0m19.796s
00:16:17.497 sys 0m2.172s
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.497 ************************************
00:16:17.497 END TEST raid5f_rebuild_test
00:16:17.497 ************************************
00:16:17.497 18:13:29 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true
00:16:17.497 18:13:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:16:17.497 18:13:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:17.497 18:13:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:17.497 ************************************
00:16:17.497 START TEST raid5f_rebuild_test_sb
00:16:17.497 ************************************
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:16:17.497 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82581
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82581
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82581 ']'
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:17.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:17.498 18:13:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:17.756 [2024-12-06 18:13:29.743649] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization...
00:16:17.756 [2024-12-06 18:13:29.743813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82581 ]
00:16:17.756 I/O size of 3145728 is greater than zero copy threshold (65536).
00:16:17.756 Zero copy mechanism will not be used.
00:16:17.756 [2024-12-06 18:13:29.909601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:18.015 [2024-12-06 18:13:30.049371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:18.274 [2024-12-06 18:13:30.291241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:18.274 [2024-12-06 18:13:30.291327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:18.533 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:18.533 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0
00:16:18.534 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:18.534 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:18.534 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.534 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 BaseBdev1_malloc
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 [2024-12-06 18:13:30.719498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:16:18.793 [2024-12-06 18:13:30.719593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.793 [2024-12-06 18:13:30.719633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:18.793 [2024-12-06 18:13:30.719649] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.793 [2024-12-06 18:13:30.722316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.793 [2024-12-06 18:13:30.722365] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:16:18.793 BaseBdev1
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 BaseBdev2_malloc
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 [2024-12-06 18:13:30.781944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:16:18.793 [2024-12-06 18:13:30.782038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.793 [2024-12-06 18:13:30.782085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:18.793 [2024-12-06 18:13:30.782100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.793 [2024-12-06 18:13:30.784712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.793 [2024-12-06 18:13:30.784763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:16:18.793 BaseBdev2
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 BaseBdev3_malloc
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 [2024-12-06 18:13:30.870274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:16:18.793 [2024-12-06 18:13:30.870369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.793 [2024-12-06 18:13:30.870401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:18.793 [2024-12-06 18:13:30.870415] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.793 [2024-12-06 18:13:30.873134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.793 [2024-12-06 18:13:30.873204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:16:18.793 BaseBdev3
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 spare_malloc
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 spare_delay
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 [2024-12-06 18:13:30.943643] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:18.793 [2024-12-06 18:13:30.943734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.793 [2024-12-06 18:13:30.943764] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:16:18.793 [2024-12-06 18:13:30.943778] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.793 [2024-12-06 18:13:30.946446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.793 [2024-12-06 18:13:30.946507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:16:18.793 spare
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.793 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.793 [2024-12-06 18:13:30.955772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:18.793 [2024-12-06 18:13:30.958162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:18.793 [2024-12-06 18:13:30.958262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:18.793 [2024-12-06 18:13:30.958509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:16:18.793 [2024-12-06 18:13:30.958533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:16:18.793 [2024-12-06 18:13:30.958879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:19.194 [2024-12-06 18:13:30.966003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:16:19.194 [2024-12-06 18:13:30.966059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:16:19.194 [2024-12-06 18:13:30.966423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:19.194 18:13:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.194 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:19.194 "name": "raid_bdev1",
00:16:19.194 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7",
00:16:19.194 "strip_size_kb": 64,
00:16:19.194 "state": "online",
00:16:19.194 "raid_level": "raid5f",
00:16:19.194 "superblock": true,
00:16:19.194 "num_base_bdevs": 3,
00:16:19.194 "num_base_bdevs_discovered": 3,
00:16:19.194 "num_base_bdevs_operational": 3,
00:16:19.194 "base_bdevs_list": [
00:16:19.194 {
00:16:19.194 "name": "BaseBdev1",
00:16:19.194 "uuid": "0d4810db-526f-5ced-be74-3f5b52f33f76",
00:16:19.194 "is_configured": true,
00:16:19.194 "data_offset": 2048,
00:16:19.194 "data_size": 63488
00:16:19.194 },
00:16:19.194 {
00:16:19.194 "name": "BaseBdev2",
00:16:19.194 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b",
00:16:19.194 "is_configured": true,
00:16:19.194 "data_offset": 2048,
00:16:19.194 "data_size": 63488
00:16:19.194 },
00:16:19.194 {
00:16:19.194 "name": "BaseBdev3",
00:16:19.194 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82",
00:16:19.194 "is_configured": true,
00:16:19.194 "data_offset": 2048,
00:16:19.194 "data_size": 63488
00:16:19.194 }
00:16:19.194 ]
00:16:19.194 }'
00:16:19.194 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:19.194 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:19.454 [2024-12-06 18:13:31.425555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:19.454 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:16:19.714 [2024-12-06 18:13:31.736855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
/dev/nbd0
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:19.714 1+0 records in
00:16:19.714 1+0 records out
00:16:19.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433391 s, 9.5 MB/s
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128
00:16:19.714 18:13:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct
00:16:20.283 496+0 records in
00:16:20.283 496+0 records out
00:16:20.283 65011712 bytes (65 MB, 62 MiB) copied, 0.482689 s, 135 MB/s
00:16:20.283 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:16:20.283 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:20.283 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:20.283 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:20.283 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:16:20.283 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:20.283 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:20.543 [2024-12-06 18:13:32.543775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.543 [2024-12-06 18:13:32.577175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.543 18:13:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.543 "name": "raid_bdev1", 00:16:20.543 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:20.543 "strip_size_kb": 64, 00:16:20.543 "state": "online", 00:16:20.543 "raid_level": "raid5f", 00:16:20.543 "superblock": true, 00:16:20.543 "num_base_bdevs": 3, 00:16:20.543 "num_base_bdevs_discovered": 2, 00:16:20.543 "num_base_bdevs_operational": 2, 00:16:20.543 "base_bdevs_list": [ 00:16:20.543 { 00:16:20.543 "name": null, 00:16:20.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.543 "is_configured": false, 00:16:20.543 "data_offset": 0, 00:16:20.543 "data_size": 63488 00:16:20.543 }, 00:16:20.543 { 00:16:20.543 "name": "BaseBdev2", 00:16:20.543 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:20.543 "is_configured": true, 00:16:20.543 "data_offset": 2048, 00:16:20.543 "data_size": 63488 00:16:20.543 }, 00:16:20.543 { 00:16:20.543 "name": "BaseBdev3", 00:16:20.543 "uuid": 
"155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:20.543 "is_configured": true, 00:16:20.543 "data_offset": 2048, 00:16:20.543 "data_size": 63488 00:16:20.543 } 00:16:20.543 ] 00:16:20.543 }' 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.543 18:13:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.112 18:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:21.112 18:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.112 18:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.112 [2024-12-06 18:13:33.072360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.112 [2024-12-06 18:13:33.093398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:21.112 18:13:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.112 18:13:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:21.112 [2024-12-06 18:13:33.104116] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.052 "name": "raid_bdev1", 00:16:22.052 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:22.052 "strip_size_kb": 64, 00:16:22.052 "state": "online", 00:16:22.052 "raid_level": "raid5f", 00:16:22.052 "superblock": true, 00:16:22.052 "num_base_bdevs": 3, 00:16:22.052 "num_base_bdevs_discovered": 3, 00:16:22.052 "num_base_bdevs_operational": 3, 00:16:22.052 "process": { 00:16:22.052 "type": "rebuild", 00:16:22.052 "target": "spare", 00:16:22.052 "progress": { 00:16:22.052 "blocks": 18432, 00:16:22.052 "percent": 14 00:16:22.052 } 00:16:22.052 }, 00:16:22.052 "base_bdevs_list": [ 00:16:22.052 { 00:16:22.052 "name": "spare", 00:16:22.052 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:22.052 "is_configured": true, 00:16:22.052 "data_offset": 2048, 00:16:22.052 "data_size": 63488 00:16:22.052 }, 00:16:22.052 { 00:16:22.052 "name": "BaseBdev2", 00:16:22.052 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:22.052 "is_configured": true, 00:16:22.052 "data_offset": 2048, 00:16:22.052 "data_size": 63488 00:16:22.052 }, 00:16:22.052 { 00:16:22.052 "name": "BaseBdev3", 00:16:22.052 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:22.052 "is_configured": true, 00:16:22.052 "data_offset": 2048, 00:16:22.052 "data_size": 63488 00:16:22.052 } 00:16:22.052 ] 00:16:22.052 }' 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.052 18:13:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.052 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.052 [2024-12-06 18:13:34.216570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.052 [2024-12-06 18:13:34.216835] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:22.052 [2024-12-06 18:13:34.216901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.052 [2024-12-06 18:13:34.216926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.052 [2024-12-06 18:13:34.216937] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.314 18:13:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.314 "name": "raid_bdev1", 00:16:22.314 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:22.314 "strip_size_kb": 64, 00:16:22.314 "state": "online", 00:16:22.314 "raid_level": "raid5f", 00:16:22.314 "superblock": true, 00:16:22.314 "num_base_bdevs": 3, 00:16:22.314 "num_base_bdevs_discovered": 2, 00:16:22.314 "num_base_bdevs_operational": 2, 00:16:22.314 "base_bdevs_list": [ 00:16:22.314 { 00:16:22.314 "name": null, 00:16:22.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.314 "is_configured": false, 00:16:22.314 "data_offset": 0, 00:16:22.314 "data_size": 63488 00:16:22.314 }, 00:16:22.314 { 00:16:22.314 "name": "BaseBdev2", 00:16:22.314 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:22.314 "is_configured": true, 00:16:22.314 "data_offset": 2048, 00:16:22.314 "data_size": 
63488 00:16:22.314 }, 00:16:22.314 { 00:16:22.314 "name": "BaseBdev3", 00:16:22.314 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:22.314 "is_configured": true, 00:16:22.314 "data_offset": 2048, 00:16:22.314 "data_size": 63488 00:16:22.314 } 00:16:22.314 ] 00:16:22.314 }' 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.314 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.572 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.830 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.830 "name": "raid_bdev1", 00:16:22.830 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:22.830 "strip_size_kb": 64, 00:16:22.830 "state": "online", 00:16:22.830 "raid_level": "raid5f", 00:16:22.830 "superblock": true, 00:16:22.830 "num_base_bdevs": 3, 00:16:22.830 
"num_base_bdevs_discovered": 2, 00:16:22.830 "num_base_bdevs_operational": 2, 00:16:22.830 "base_bdevs_list": [ 00:16:22.830 { 00:16:22.830 "name": null, 00:16:22.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.830 "is_configured": false, 00:16:22.830 "data_offset": 0, 00:16:22.830 "data_size": 63488 00:16:22.830 }, 00:16:22.830 { 00:16:22.830 "name": "BaseBdev2", 00:16:22.830 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:22.830 "is_configured": true, 00:16:22.830 "data_offset": 2048, 00:16:22.830 "data_size": 63488 00:16:22.830 }, 00:16:22.830 { 00:16:22.830 "name": "BaseBdev3", 00:16:22.830 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:22.830 "is_configured": true, 00:16:22.830 "data_offset": 2048, 00:16:22.830 "data_size": 63488 00:16:22.830 } 00:16:22.830 ] 00:16:22.830 }' 00:16:22.830 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.830 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.830 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.830 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.830 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:22.830 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.830 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.830 [2024-12-06 18:13:34.842556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.830 [2024-12-06 18:13:34.862319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:22.830 18:13:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.830 18:13:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:22.830 [2024-12-06 18:13:34.872261] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.763 "name": "raid_bdev1", 00:16:23.763 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:23.763 "strip_size_kb": 64, 00:16:23.763 "state": "online", 00:16:23.763 "raid_level": "raid5f", 00:16:23.763 "superblock": true, 00:16:23.763 "num_base_bdevs": 3, 00:16:23.763 "num_base_bdevs_discovered": 3, 00:16:23.763 "num_base_bdevs_operational": 3, 00:16:23.763 "process": { 00:16:23.763 "type": "rebuild", 00:16:23.763 "target": "spare", 00:16:23.763 "progress": { 00:16:23.763 "blocks": 20480, 00:16:23.763 "percent": 16 00:16:23.763 } 
00:16:23.763 }, 00:16:23.763 "base_bdevs_list": [ 00:16:23.763 { 00:16:23.763 "name": "spare", 00:16:23.763 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:23.763 "is_configured": true, 00:16:23.763 "data_offset": 2048, 00:16:23.763 "data_size": 63488 00:16:23.763 }, 00:16:23.763 { 00:16:23.763 "name": "BaseBdev2", 00:16:23.763 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:23.763 "is_configured": true, 00:16:23.763 "data_offset": 2048, 00:16:23.763 "data_size": 63488 00:16:23.763 }, 00:16:23.763 { 00:16:23.763 "name": "BaseBdev3", 00:16:23.763 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:23.763 "is_configured": true, 00:16:23.763 "data_offset": 2048, 00:16:23.763 "data_size": 63488 00:16:23.763 } 00:16:23.763 ] 00:16:23.763 }' 00:16:23.763 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.021 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.021 18:13:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.021 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.021 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:24.021 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:24.021 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:24.021 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:24.021 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:24.021 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=590 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.022 18:13:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.022 "name": "raid_bdev1", 00:16:24.022 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:24.022 "strip_size_kb": 64, 00:16:24.022 "state": "online", 00:16:24.022 "raid_level": "raid5f", 00:16:24.022 "superblock": true, 00:16:24.022 "num_base_bdevs": 3, 00:16:24.022 "num_base_bdevs_discovered": 3, 00:16:24.022 "num_base_bdevs_operational": 3, 00:16:24.022 "process": { 00:16:24.022 "type": "rebuild", 00:16:24.022 "target": "spare", 00:16:24.022 "progress": { 00:16:24.022 "blocks": 22528, 00:16:24.022 "percent": 17 00:16:24.022 } 00:16:24.022 }, 00:16:24.022 "base_bdevs_list": [ 00:16:24.022 { 00:16:24.022 "name": "spare", 00:16:24.022 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:24.022 "is_configured": true, 00:16:24.022 "data_offset": 2048, 00:16:24.022 
"data_size": 63488 00:16:24.022 }, 00:16:24.022 { 00:16:24.022 "name": "BaseBdev2", 00:16:24.022 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:24.022 "is_configured": true, 00:16:24.022 "data_offset": 2048, 00:16:24.022 "data_size": 63488 00:16:24.022 }, 00:16:24.022 { 00:16:24.022 "name": "BaseBdev3", 00:16:24.022 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:24.022 "is_configured": true, 00:16:24.022 "data_offset": 2048, 00:16:24.022 "data_size": 63488 00:16:24.022 } 00:16:24.022 ] 00:16:24.022 }' 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.022 18:13:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.413 
18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.413 "name": "raid_bdev1", 00:16:25.413 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:25.413 "strip_size_kb": 64, 00:16:25.413 "state": "online", 00:16:25.413 "raid_level": "raid5f", 00:16:25.413 "superblock": true, 00:16:25.413 "num_base_bdevs": 3, 00:16:25.413 "num_base_bdevs_discovered": 3, 00:16:25.413 "num_base_bdevs_operational": 3, 00:16:25.413 "process": { 00:16:25.413 "type": "rebuild", 00:16:25.413 "target": "spare", 00:16:25.413 "progress": { 00:16:25.413 "blocks": 45056, 00:16:25.413 "percent": 35 00:16:25.413 } 00:16:25.413 }, 00:16:25.413 "base_bdevs_list": [ 00:16:25.413 { 00:16:25.413 "name": "spare", 00:16:25.413 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:25.413 "is_configured": true, 00:16:25.413 "data_offset": 2048, 00:16:25.413 "data_size": 63488 00:16:25.413 }, 00:16:25.413 { 00:16:25.413 "name": "BaseBdev2", 00:16:25.413 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:25.413 "is_configured": true, 00:16:25.413 "data_offset": 2048, 00:16:25.413 "data_size": 63488 00:16:25.413 }, 00:16:25.413 { 00:16:25.413 "name": "BaseBdev3", 00:16:25.413 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:25.413 "is_configured": true, 00:16:25.413 "data_offset": 2048, 00:16:25.413 "data_size": 63488 00:16:25.413 } 00:16:25.413 ] 00:16:25.413 }' 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.413 18:13:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.413 18:13:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.393 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.393 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.394 "name": "raid_bdev1", 00:16:26.394 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:26.394 "strip_size_kb": 64, 00:16:26.394 "state": "online", 00:16:26.394 "raid_level": "raid5f", 00:16:26.394 "superblock": true, 00:16:26.394 "num_base_bdevs": 3, 00:16:26.394 "num_base_bdevs_discovered": 3, 00:16:26.394 "num_base_bdevs_operational": 
3, 00:16:26.394 "process": { 00:16:26.394 "type": "rebuild", 00:16:26.394 "target": "spare", 00:16:26.394 "progress": { 00:16:26.394 "blocks": 67584, 00:16:26.394 "percent": 53 00:16:26.394 } 00:16:26.394 }, 00:16:26.394 "base_bdevs_list": [ 00:16:26.394 { 00:16:26.394 "name": "spare", 00:16:26.394 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:26.394 "is_configured": true, 00:16:26.394 "data_offset": 2048, 00:16:26.394 "data_size": 63488 00:16:26.394 }, 00:16:26.394 { 00:16:26.394 "name": "BaseBdev2", 00:16:26.394 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:26.394 "is_configured": true, 00:16:26.394 "data_offset": 2048, 00:16:26.394 "data_size": 63488 00:16:26.394 }, 00:16:26.394 { 00:16:26.394 "name": "BaseBdev3", 00:16:26.394 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:26.394 "is_configured": true, 00:16:26.394 "data_offset": 2048, 00:16:26.394 "data_size": 63488 00:16:26.394 } 00:16:26.394 ] 00:16:26.394 }' 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.394 18:13:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.331 
18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.331 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.590 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.590 "name": "raid_bdev1", 00:16:27.590 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:27.590 "strip_size_kb": 64, 00:16:27.590 "state": "online", 00:16:27.590 "raid_level": "raid5f", 00:16:27.590 "superblock": true, 00:16:27.590 "num_base_bdevs": 3, 00:16:27.590 "num_base_bdevs_discovered": 3, 00:16:27.590 "num_base_bdevs_operational": 3, 00:16:27.590 "process": { 00:16:27.590 "type": "rebuild", 00:16:27.590 "target": "spare", 00:16:27.590 "progress": { 00:16:27.590 "blocks": 92160, 00:16:27.590 "percent": 72 00:16:27.590 } 00:16:27.590 }, 00:16:27.590 "base_bdevs_list": [ 00:16:27.590 { 00:16:27.590 "name": "spare", 00:16:27.590 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:27.590 "is_configured": true, 00:16:27.590 "data_offset": 2048, 00:16:27.590 "data_size": 63488 00:16:27.590 }, 00:16:27.590 { 00:16:27.590 "name": "BaseBdev2", 00:16:27.590 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:27.590 "is_configured": true, 00:16:27.590 "data_offset": 2048, 00:16:27.590 "data_size": 63488 00:16:27.590 }, 00:16:27.590 { 00:16:27.590 "name": "BaseBdev3", 00:16:27.590 "uuid": 
"155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:27.590 "is_configured": true, 00:16:27.590 "data_offset": 2048, 00:16:27.590 "data_size": 63488 00:16:27.590 } 00:16:27.590 ] 00:16:27.590 }' 00:16:27.590 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.590 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.590 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.590 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.590 18:13:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.530 
18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.530 "name": "raid_bdev1", 00:16:28.530 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:28.530 "strip_size_kb": 64, 00:16:28.530 "state": "online", 00:16:28.530 "raid_level": "raid5f", 00:16:28.530 "superblock": true, 00:16:28.530 "num_base_bdevs": 3, 00:16:28.530 "num_base_bdevs_discovered": 3, 00:16:28.530 "num_base_bdevs_operational": 3, 00:16:28.530 "process": { 00:16:28.530 "type": "rebuild", 00:16:28.530 "target": "spare", 00:16:28.530 "progress": { 00:16:28.530 "blocks": 114688, 00:16:28.530 "percent": 90 00:16:28.530 } 00:16:28.530 }, 00:16:28.530 "base_bdevs_list": [ 00:16:28.530 { 00:16:28.530 "name": "spare", 00:16:28.530 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:28.530 "is_configured": true, 00:16:28.530 "data_offset": 2048, 00:16:28.530 "data_size": 63488 00:16:28.530 }, 00:16:28.530 { 00:16:28.530 "name": "BaseBdev2", 00:16:28.530 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:28.530 "is_configured": true, 00:16:28.530 "data_offset": 2048, 00:16:28.530 "data_size": 63488 00:16:28.530 }, 00:16:28.530 { 00:16:28.530 "name": "BaseBdev3", 00:16:28.530 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:28.530 "is_configured": true, 00:16:28.530 "data_offset": 2048, 00:16:28.530 "data_size": 63488 00:16:28.530 } 00:16:28.530 ] 00:16:28.530 }' 00:16:28.530 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.790 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.790 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.790 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.790 18:13:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.050 [2024-12-06 18:13:41.140115] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:29.050 [2024-12-06 18:13:41.140268] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:29.050 [2024-12-06 18:13:41.140445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.619 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.878 "name": "raid_bdev1", 00:16:29.878 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:29.878 "strip_size_kb": 64, 00:16:29.878 "state": "online", 00:16:29.878 "raid_level": "raid5f", 00:16:29.878 "superblock": true, 00:16:29.878 "num_base_bdevs": 3, 00:16:29.878 "num_base_bdevs_discovered": 3, 
00:16:29.878 "num_base_bdevs_operational": 3, 00:16:29.878 "base_bdevs_list": [ 00:16:29.878 { 00:16:29.878 "name": "spare", 00:16:29.878 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:29.878 "is_configured": true, 00:16:29.878 "data_offset": 2048, 00:16:29.878 "data_size": 63488 00:16:29.878 }, 00:16:29.878 { 00:16:29.878 "name": "BaseBdev2", 00:16:29.878 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:29.878 "is_configured": true, 00:16:29.878 "data_offset": 2048, 00:16:29.878 "data_size": 63488 00:16:29.878 }, 00:16:29.878 { 00:16:29.878 "name": "BaseBdev3", 00:16:29.878 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:29.878 "is_configured": true, 00:16:29.878 "data_offset": 2048, 00:16:29.878 "data_size": 63488 00:16:29.878 } 00:16:29.878 ] 00:16:29.878 }' 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.878 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.879 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.879 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.879 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.879 "name": "raid_bdev1", 00:16:29.879 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:29.879 "strip_size_kb": 64, 00:16:29.879 "state": "online", 00:16:29.879 "raid_level": "raid5f", 00:16:29.879 "superblock": true, 00:16:29.879 "num_base_bdevs": 3, 00:16:29.879 "num_base_bdevs_discovered": 3, 00:16:29.879 "num_base_bdevs_operational": 3, 00:16:29.879 "base_bdevs_list": [ 00:16:29.879 { 00:16:29.879 "name": "spare", 00:16:29.879 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:29.879 "is_configured": true, 00:16:29.879 "data_offset": 2048, 00:16:29.879 "data_size": 63488 00:16:29.879 }, 00:16:29.879 { 00:16:29.879 "name": "BaseBdev2", 00:16:29.879 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:29.879 "is_configured": true, 00:16:29.879 "data_offset": 2048, 00:16:29.879 "data_size": 63488 00:16:29.879 }, 00:16:29.879 { 00:16:29.879 "name": "BaseBdev3", 00:16:29.879 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:29.879 "is_configured": true, 00:16:29.879 "data_offset": 2048, 00:16:29.879 "data_size": 63488 00:16:29.879 } 00:16:29.879 ] 00:16:29.879 }' 00:16:29.879 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.879 18:13:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.879 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.138 "name": "raid_bdev1", 00:16:30.138 "uuid": 
"f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:30.138 "strip_size_kb": 64, 00:16:30.138 "state": "online", 00:16:30.138 "raid_level": "raid5f", 00:16:30.138 "superblock": true, 00:16:30.138 "num_base_bdevs": 3, 00:16:30.138 "num_base_bdevs_discovered": 3, 00:16:30.138 "num_base_bdevs_operational": 3, 00:16:30.138 "base_bdevs_list": [ 00:16:30.138 { 00:16:30.138 "name": "spare", 00:16:30.138 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:30.138 "is_configured": true, 00:16:30.138 "data_offset": 2048, 00:16:30.138 "data_size": 63488 00:16:30.138 }, 00:16:30.138 { 00:16:30.138 "name": "BaseBdev2", 00:16:30.138 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:30.138 "is_configured": true, 00:16:30.138 "data_offset": 2048, 00:16:30.138 "data_size": 63488 00:16:30.138 }, 00:16:30.138 { 00:16:30.138 "name": "BaseBdev3", 00:16:30.138 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:30.138 "is_configured": true, 00:16:30.138 "data_offset": 2048, 00:16:30.138 "data_size": 63488 00:16:30.138 } 00:16:30.138 ] 00:16:30.138 }' 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.138 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.397 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:30.397 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.397 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.397 [2024-12-06 18:13:42.548154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.397 [2024-12-06 18:13:42.548204] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.397 [2024-12-06 18:13:42.548352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.397 [2024-12-06 18:13:42.548497] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.397 [2024-12-06 18:13:42.548532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:30.397 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.397 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.397 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.397 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.397 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.656 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:30.965 /dev/nbd0 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.965 1+0 records in 00:16:30.965 1+0 records out 00:16:30.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448337 s, 9.1 MB/s 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.965 18:13:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.965 18:13:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:31.240 /dev/nbd1 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.240 1+0 records in 00:16:31.240 1+0 records out 00:16:31.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351624 s, 11.6 MB/s 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:31.240 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:31.499 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:31.499 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.499 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:31.499 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:31.499 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:31.499 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.499 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:31.757 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:31.757 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.757 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.757 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.757 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.757 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.757 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:31.757 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.758 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.758 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.016 [2024-12-06 18:13:43.976724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:32.016 [2024-12-06 18:13:43.976814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.016 [2024-12-06 18:13:43.976841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:32.016 [2024-12-06 18:13:43.976854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.016 [2024-12-06 18:13:43.979291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.016 [2024-12-06 18:13:43.979331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:32.016 [2024-12-06 18:13:43.979434] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:32.016 [2024-12-06 18:13:43.979497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.016 [2024-12-06 18:13:43.979692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.016 [2024-12-06 18:13:43.979825] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:32.016 spare 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.016 18:13:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.016 [2024-12-06 18:13:44.079774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:32.016 [2024-12-06 18:13:44.079846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:32.016 [2024-12-06 18:13:44.080215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:32.016 [2024-12-06 18:13:44.086369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:32.016 [2024-12-06 18:13:44.086396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:32.016 [2024-12-06 18:13:44.086651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.016 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.016 "name": "raid_bdev1", 00:16:32.016 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:32.016 "strip_size_kb": 64, 00:16:32.016 "state": "online", 00:16:32.016 "raid_level": "raid5f", 00:16:32.016 "superblock": true, 00:16:32.016 "num_base_bdevs": 3, 00:16:32.016 "num_base_bdevs_discovered": 3, 00:16:32.016 "num_base_bdevs_operational": 3, 00:16:32.016 "base_bdevs_list": [ 00:16:32.016 { 00:16:32.016 "name": "spare", 00:16:32.017 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:32.017 "is_configured": true, 00:16:32.017 "data_offset": 2048, 00:16:32.017 "data_size": 63488 00:16:32.017 }, 00:16:32.017 { 00:16:32.017 "name": "BaseBdev2", 00:16:32.017 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:32.017 "is_configured": true, 00:16:32.017 "data_offset": 
2048, 00:16:32.017 "data_size": 63488 00:16:32.017 }, 00:16:32.017 { 00:16:32.017 "name": "BaseBdev3", 00:16:32.017 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:32.017 "is_configured": true, 00:16:32.017 "data_offset": 2048, 00:16:32.017 "data_size": 63488 00:16:32.017 } 00:16:32.017 ] 00:16:32.017 }' 00:16:32.017 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.017 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.586 "name": "raid_bdev1", 00:16:32.586 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:32.586 "strip_size_kb": 64, 00:16:32.586 "state": "online", 00:16:32.586 "raid_level": "raid5f", 00:16:32.586 "superblock": true, 00:16:32.586 
"num_base_bdevs": 3, 00:16:32.586 "num_base_bdevs_discovered": 3, 00:16:32.586 "num_base_bdevs_operational": 3, 00:16:32.586 "base_bdevs_list": [ 00:16:32.586 { 00:16:32.586 "name": "spare", 00:16:32.586 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:32.586 "is_configured": true, 00:16:32.586 "data_offset": 2048, 00:16:32.586 "data_size": 63488 00:16:32.586 }, 00:16:32.586 { 00:16:32.586 "name": "BaseBdev2", 00:16:32.586 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:32.586 "is_configured": true, 00:16:32.586 "data_offset": 2048, 00:16:32.586 "data_size": 63488 00:16:32.586 }, 00:16:32.586 { 00:16:32.586 "name": "BaseBdev3", 00:16:32.586 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:32.586 "is_configured": true, 00:16:32.586 "data_offset": 2048, 00:16:32.586 "data_size": 63488 00:16:32.586 } 00:16:32.586 ] 00:16:32.586 }' 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.586 18:13:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.586 [2024-12-06 18:13:44.716505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.586 18:13:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.586 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.846 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.846 "name": "raid_bdev1", 00:16:32.846 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:32.846 "strip_size_kb": 64, 00:16:32.846 "state": "online", 00:16:32.846 "raid_level": "raid5f", 00:16:32.846 "superblock": true, 00:16:32.846 "num_base_bdevs": 3, 00:16:32.846 "num_base_bdevs_discovered": 2, 00:16:32.846 "num_base_bdevs_operational": 2, 00:16:32.846 "base_bdevs_list": [ 00:16:32.846 { 00:16:32.846 "name": null, 00:16:32.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.846 "is_configured": false, 00:16:32.846 "data_offset": 0, 00:16:32.846 "data_size": 63488 00:16:32.846 }, 00:16:32.846 { 00:16:32.846 "name": "BaseBdev2", 00:16:32.846 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:32.846 "is_configured": true, 00:16:32.846 "data_offset": 2048, 00:16:32.846 "data_size": 63488 00:16:32.846 }, 00:16:32.846 { 00:16:32.846 "name": "BaseBdev3", 00:16:32.846 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:32.846 "is_configured": true, 00:16:32.846 "data_offset": 2048, 00:16:32.846 "data_size": 63488 00:16:32.846 } 00:16:32.846 ] 00:16:32.846 }' 00:16:32.846 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.846 18:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.105 18:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:33.105 18:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.105 18:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.105 [2024-12-06 18:13:45.123837] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.105 [2024-12-06 18:13:45.124056] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:33.105 [2024-12-06 18:13:45.124097] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:33.105 [2024-12-06 18:13:45.124137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.105 [2024-12-06 18:13:45.140652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:33.105 18:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.105 18:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:33.105 [2024-12-06 18:13:45.148822] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.041 "name": "raid_bdev1", 00:16:34.041 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:34.041 "strip_size_kb": 64, 00:16:34.041 "state": "online", 00:16:34.041 "raid_level": "raid5f", 00:16:34.041 "superblock": true, 00:16:34.041 "num_base_bdevs": 3, 00:16:34.041 "num_base_bdevs_discovered": 3, 00:16:34.041 "num_base_bdevs_operational": 3, 00:16:34.041 "process": { 00:16:34.041 "type": "rebuild", 00:16:34.041 "target": "spare", 00:16:34.041 "progress": { 00:16:34.041 "blocks": 20480, 00:16:34.041 "percent": 16 00:16:34.041 } 00:16:34.041 }, 00:16:34.041 "base_bdevs_list": [ 00:16:34.041 { 00:16:34.041 "name": "spare", 00:16:34.041 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:34.041 "is_configured": true, 00:16:34.041 "data_offset": 2048, 00:16:34.041 "data_size": 63488 00:16:34.041 }, 00:16:34.041 { 00:16:34.041 "name": "BaseBdev2", 00:16:34.041 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:34.041 "is_configured": true, 00:16:34.041 "data_offset": 2048, 00:16:34.041 "data_size": 63488 00:16:34.041 }, 00:16:34.041 { 00:16:34.041 "name": "BaseBdev3", 00:16:34.041 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:34.041 "is_configured": true, 00:16:34.041 "data_offset": 2048, 00:16:34.041 "data_size": 63488 00:16:34.041 } 00:16:34.041 ] 00:16:34.041 }' 00:16:34.041 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.300 [2024-12-06 18:13:46.304378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.300 [2024-12-06 18:13:46.359935] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:34.300 [2024-12-06 18:13:46.360025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.300 [2024-12-06 18:13:46.360049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.300 [2024-12-06 18:13:46.360074] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.300 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.301 18:13:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.301 "name": "raid_bdev1", 00:16:34.301 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:34.301 "strip_size_kb": 64, 00:16:34.301 "state": "online", 00:16:34.301 "raid_level": "raid5f", 00:16:34.301 "superblock": true, 00:16:34.301 "num_base_bdevs": 3, 00:16:34.301 "num_base_bdevs_discovered": 2, 00:16:34.301 "num_base_bdevs_operational": 2, 00:16:34.301 "base_bdevs_list": [ 00:16:34.301 { 00:16:34.301 "name": null, 00:16:34.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.301 "is_configured": false, 00:16:34.301 "data_offset": 0, 00:16:34.301 "data_size": 63488 00:16:34.301 }, 00:16:34.301 { 00:16:34.301 "name": "BaseBdev2", 00:16:34.301 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:34.301 "is_configured": true, 00:16:34.301 "data_offset": 2048, 00:16:34.301 "data_size": 63488 00:16:34.301 }, 00:16:34.301 { 00:16:34.301 "name": "BaseBdev3", 00:16:34.301 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:34.301 "is_configured": true, 00:16:34.301 "data_offset": 2048, 00:16:34.301 "data_size": 63488 00:16:34.301 } 00:16:34.301 ] 00:16:34.301 }' 00:16:34.301 18:13:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.301 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.869 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.869 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.869 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.869 [2024-12-06 18:13:46.867815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.869 [2024-12-06 18:13:46.867896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.869 [2024-12-06 18:13:46.867921] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:34.869 [2024-12-06 18:13:46.867937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.869 [2024-12-06 18:13:46.868549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.869 [2024-12-06 18:13:46.868586] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.869 [2024-12-06 18:13:46.868727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.869 [2024-12-06 18:13:46.868759] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:34.869 [2024-12-06 18:13:46.868771] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:34.869 [2024-12-06 18:13:46.868803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.869 [2024-12-06 18:13:46.887180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:34.869 spare 00:16:34.869 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.869 18:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:34.869 [2024-12-06 18:13:46.896646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.805 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.806 "name": "raid_bdev1", 00:16:35.806 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:35.806 "strip_size_kb": 64, 00:16:35.806 "state": 
"online", 00:16:35.806 "raid_level": "raid5f", 00:16:35.806 "superblock": true, 00:16:35.806 "num_base_bdevs": 3, 00:16:35.806 "num_base_bdevs_discovered": 3, 00:16:35.806 "num_base_bdevs_operational": 3, 00:16:35.806 "process": { 00:16:35.806 "type": "rebuild", 00:16:35.806 "target": "spare", 00:16:35.806 "progress": { 00:16:35.806 "blocks": 20480, 00:16:35.806 "percent": 16 00:16:35.806 } 00:16:35.806 }, 00:16:35.806 "base_bdevs_list": [ 00:16:35.806 { 00:16:35.806 "name": "spare", 00:16:35.806 "uuid": "83e97f0c-907a-5428-b17e-20be527a56dd", 00:16:35.806 "is_configured": true, 00:16:35.806 "data_offset": 2048, 00:16:35.806 "data_size": 63488 00:16:35.806 }, 00:16:35.806 { 00:16:35.806 "name": "BaseBdev2", 00:16:35.806 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:35.806 "is_configured": true, 00:16:35.806 "data_offset": 2048, 00:16:35.806 "data_size": 63488 00:16:35.806 }, 00:16:35.806 { 00:16:35.806 "name": "BaseBdev3", 00:16:35.806 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:35.806 "is_configured": true, 00:16:35.806 "data_offset": 2048, 00:16:35.806 "data_size": 63488 00:16:35.806 } 00:16:35.806 ] 00:16:35.806 }' 00:16:35.806 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.065 18:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.065 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.066 [2024-12-06 18:13:48.056587] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.066 [2024-12-06 18:13:48.108478] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:36.066 [2024-12-06 18:13:48.108550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.066 [2024-12-06 18:13:48.108574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.066 [2024-12-06 18:13:48.108584] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.066 "name": "raid_bdev1", 00:16:36.066 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:36.066 "strip_size_kb": 64, 00:16:36.066 "state": "online", 00:16:36.066 "raid_level": "raid5f", 00:16:36.066 "superblock": true, 00:16:36.066 "num_base_bdevs": 3, 00:16:36.066 "num_base_bdevs_discovered": 2, 00:16:36.066 "num_base_bdevs_operational": 2, 00:16:36.066 "base_bdevs_list": [ 00:16:36.066 { 00:16:36.066 "name": null, 00:16:36.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.066 "is_configured": false, 00:16:36.066 "data_offset": 0, 00:16:36.066 "data_size": 63488 00:16:36.066 }, 00:16:36.066 { 00:16:36.066 "name": "BaseBdev2", 00:16:36.066 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:36.066 "is_configured": true, 00:16:36.066 "data_offset": 2048, 00:16:36.066 "data_size": 63488 00:16:36.066 }, 00:16:36.066 { 00:16:36.066 "name": "BaseBdev3", 00:16:36.066 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:36.066 "is_configured": true, 00:16:36.066 "data_offset": 2048, 00:16:36.066 "data_size": 63488 00:16:36.066 } 00:16:36.066 ] 00:16:36.066 }' 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.066 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.649 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.649 "name": "raid_bdev1", 00:16:36.649 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:36.649 "strip_size_kb": 64, 00:16:36.649 "state": "online", 00:16:36.649 "raid_level": "raid5f", 00:16:36.649 "superblock": true, 00:16:36.650 "num_base_bdevs": 3, 00:16:36.650 "num_base_bdevs_discovered": 2, 00:16:36.650 "num_base_bdevs_operational": 2, 00:16:36.650 "base_bdevs_list": [ 00:16:36.650 { 00:16:36.650 "name": null, 00:16:36.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.650 "is_configured": false, 00:16:36.650 "data_offset": 0, 00:16:36.650 "data_size": 63488 00:16:36.650 }, 00:16:36.650 { 00:16:36.650 "name": "BaseBdev2", 00:16:36.650 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:36.650 "is_configured": true, 00:16:36.650 "data_offset": 2048, 00:16:36.650 "data_size": 63488 00:16:36.650 }, 00:16:36.650 { 00:16:36.650 "name": "BaseBdev3", 00:16:36.650 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:36.650 "is_configured": true, 
00:16:36.650 "data_offset": 2048, 00:16:36.650 "data_size": 63488 00:16:36.650 } 00:16:36.650 ] 00:16:36.650 }' 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.650 [2024-12-06 18:13:48.743363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:36.650 [2024-12-06 18:13:48.743441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.650 [2024-12-06 18:13:48.743478] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:36.650 [2024-12-06 18:13:48.743512] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.650 [2024-12-06 18:13:48.744152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.650 [2024-12-06 
18:13:48.744191] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:36.650 [2024-12-06 18:13:48.744306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:36.650 [2024-12-06 18:13:48.744334] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:36.650 [2024-12-06 18:13:48.744347] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:36.650 [2024-12-06 18:13:48.744379] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:36.650 BaseBdev1 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.650 18:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.592 18:13:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.592 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.850 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.850 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.850 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.850 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.850 "name": "raid_bdev1", 00:16:37.850 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:37.850 "strip_size_kb": 64, 00:16:37.850 "state": "online", 00:16:37.850 "raid_level": "raid5f", 00:16:37.850 "superblock": true, 00:16:37.850 "num_base_bdevs": 3, 00:16:37.850 "num_base_bdevs_discovered": 2, 00:16:37.850 "num_base_bdevs_operational": 2, 00:16:37.850 "base_bdevs_list": [ 00:16:37.850 { 00:16:37.850 "name": null, 00:16:37.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.850 "is_configured": false, 00:16:37.850 "data_offset": 0, 00:16:37.850 "data_size": 63488 00:16:37.850 }, 00:16:37.850 { 00:16:37.850 "name": "BaseBdev2", 00:16:37.850 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:37.850 "is_configured": true, 00:16:37.850 "data_offset": 2048, 00:16:37.850 "data_size": 63488 00:16:37.850 }, 00:16:37.850 { 00:16:37.850 "name": "BaseBdev3", 00:16:37.850 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:37.850 "is_configured": true, 00:16:37.850 "data_offset": 2048, 00:16:37.850 "data_size": 63488 00:16:37.850 } 00:16:37.850 ] 00:16:37.850 }' 00:16:37.850 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.850 18:13:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.108 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.108 "name": "raid_bdev1", 00:16:38.108 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:38.108 "strip_size_kb": 64, 00:16:38.108 "state": "online", 00:16:38.108 "raid_level": "raid5f", 00:16:38.108 "superblock": true, 00:16:38.108 "num_base_bdevs": 3, 00:16:38.108 "num_base_bdevs_discovered": 2, 00:16:38.108 "num_base_bdevs_operational": 2, 00:16:38.108 "base_bdevs_list": [ 00:16:38.108 { 00:16:38.108 "name": null, 00:16:38.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.108 "is_configured": false, 00:16:38.108 "data_offset": 0, 00:16:38.108 "data_size": 63488 00:16:38.108 }, 00:16:38.108 { 00:16:38.108 "name": "BaseBdev2", 00:16:38.108 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 
00:16:38.108 "is_configured": true, 00:16:38.108 "data_offset": 2048, 00:16:38.108 "data_size": 63488 00:16:38.108 }, 00:16:38.108 { 00:16:38.108 "name": "BaseBdev3", 00:16:38.109 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:38.109 "is_configured": true, 00:16:38.109 "data_offset": 2048, 00:16:38.109 "data_size": 63488 00:16:38.109 } 00:16:38.109 ] 00:16:38.109 }' 00:16:38.109 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.367 18:13:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.367 [2024-12-06 18:13:50.348801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.367 [2024-12-06 18:13:50.349028] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:38.367 [2024-12-06 18:13:50.349057] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:38.367 request: 00:16:38.367 { 00:16:38.367 "base_bdev": "BaseBdev1", 00:16:38.367 "raid_bdev": "raid_bdev1", 00:16:38.367 "method": "bdev_raid_add_base_bdev", 00:16:38.367 "req_id": 1 00:16:38.367 } 00:16:38.367 Got JSON-RPC error response 00:16:38.367 response: 00:16:38.367 { 00:16:38.367 "code": -22, 00:16:38.367 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:38.367 } 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:38.367 18:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.302 "name": "raid_bdev1", 00:16:39.302 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:39.302 "strip_size_kb": 64, 00:16:39.302 "state": "online", 00:16:39.302 "raid_level": "raid5f", 00:16:39.302 "superblock": true, 00:16:39.302 "num_base_bdevs": 3, 00:16:39.302 "num_base_bdevs_discovered": 2, 00:16:39.302 "num_base_bdevs_operational": 2, 00:16:39.302 "base_bdevs_list": [ 00:16:39.302 { 00:16:39.302 "name": null, 00:16:39.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.302 "is_configured": false, 00:16:39.302 "data_offset": 0, 00:16:39.302 "data_size": 63488 00:16:39.302 }, 00:16:39.302 { 00:16:39.302 
"name": "BaseBdev2", 00:16:39.302 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:39.302 "is_configured": true, 00:16:39.302 "data_offset": 2048, 00:16:39.302 "data_size": 63488 00:16:39.302 }, 00:16:39.302 { 00:16:39.302 "name": "BaseBdev3", 00:16:39.302 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:39.302 "is_configured": true, 00:16:39.302 "data_offset": 2048, 00:16:39.302 "data_size": 63488 00:16:39.302 } 00:16:39.302 ] 00:16:39.302 }' 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.302 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.869 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.870 "name": "raid_bdev1", 00:16:39.870 "uuid": "f0cb534d-8f2f-4ad0-89a5-754a64a752a7", 00:16:39.870 
"strip_size_kb": 64, 00:16:39.870 "state": "online", 00:16:39.870 "raid_level": "raid5f", 00:16:39.870 "superblock": true, 00:16:39.870 "num_base_bdevs": 3, 00:16:39.870 "num_base_bdevs_discovered": 2, 00:16:39.870 "num_base_bdevs_operational": 2, 00:16:39.870 "base_bdevs_list": [ 00:16:39.870 { 00:16:39.870 "name": null, 00:16:39.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.870 "is_configured": false, 00:16:39.870 "data_offset": 0, 00:16:39.870 "data_size": 63488 00:16:39.870 }, 00:16:39.870 { 00:16:39.870 "name": "BaseBdev2", 00:16:39.870 "uuid": "16e1e90f-1cb6-5a69-acc7-00b25341717b", 00:16:39.870 "is_configured": true, 00:16:39.870 "data_offset": 2048, 00:16:39.870 "data_size": 63488 00:16:39.870 }, 00:16:39.870 { 00:16:39.870 "name": "BaseBdev3", 00:16:39.870 "uuid": "155b8c46-689c-58d0-8ec9-ced7e82ccd82", 00:16:39.870 "is_configured": true, 00:16:39.870 "data_offset": 2048, 00:16:39.870 "data_size": 63488 00:16:39.870 } 00:16:39.870 ] 00:16:39.870 }' 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82581 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82581 ']' 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82581 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.870 18:13:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82581 00:16:39.870 killing process with pid 82581 00:16:39.870 Received shutdown signal, test time was about 60.000000 seconds 00:16:39.870 00:16:39.870 Latency(us) 00:16:39.870 [2024-12-06T18:13:52.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.870 [2024-12-06T18:13:52.038Z] =================================================================================================================== 00:16:39.870 [2024-12-06T18:13:52.038Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82581' 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82581 00:16:39.870 18:13:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82581 00:16:39.870 [2024-12-06 18:13:51.967656] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.870 [2024-12-06 18:13:51.967813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.870 [2024-12-06 18:13:51.967905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.870 [2024-12-06 18:13:51.967927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:40.438 [2024-12-06 18:13:52.398578] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.820 18:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:41.820 00:16:41.820 real 0m23.952s 00:16:41.820 user 0m30.818s 
00:16:41.820 sys 0m2.876s 00:16:41.820 18:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.820 18:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.820 ************************************ 00:16:41.820 END TEST raid5f_rebuild_test_sb 00:16:41.820 ************************************ 00:16:41.820 18:13:53 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:41.820 18:13:53 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:41.820 18:13:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:41.820 18:13:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.820 18:13:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.820 ************************************ 00:16:41.820 START TEST raid5f_state_function_test 00:16:41.820 ************************************ 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83338 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83338' 00:16:41.820 Process raid pid: 83338 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83338 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83338 ']' 00:16:41.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.820 18:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.820 [2024-12-06 18:13:53.757239] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:16:41.820 [2024-12-06 18:13:53.757489] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.820 [2024-12-06 18:13:53.938326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.079 [2024-12-06 18:13:54.077464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.338 [2024-12-06 18:13:54.318552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.338 [2024-12-06 18:13:54.318710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.611 [2024-12-06 18:13:54.688504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.611 [2024-12-06 18:13:54.688583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.611 [2024-12-06 18:13:54.688601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.611 [2024-12-06 18:13:54.688617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.611 [2024-12-06 18:13:54.688627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:42.611 [2024-12-06 18:13:54.688641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.611 [2024-12-06 18:13:54.688651] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:42.611 [2024-12-06 18:13:54.688665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.611 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.612 "name": "Existed_Raid", 00:16:42.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.612 "strip_size_kb": 64, 00:16:42.612 "state": "configuring", 00:16:42.612 "raid_level": "raid5f", 00:16:42.612 "superblock": false, 00:16:42.612 "num_base_bdevs": 4, 00:16:42.612 "num_base_bdevs_discovered": 0, 00:16:42.612 "num_base_bdevs_operational": 4, 00:16:42.612 "base_bdevs_list": [ 00:16:42.612 { 00:16:42.612 "name": "BaseBdev1", 00:16:42.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.612 "is_configured": false, 00:16:42.612 "data_offset": 0, 00:16:42.612 "data_size": 0 00:16:42.612 }, 00:16:42.612 { 00:16:42.612 "name": "BaseBdev2", 00:16:42.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.612 "is_configured": false, 00:16:42.612 "data_offset": 0, 00:16:42.612 "data_size": 0 00:16:42.612 }, 00:16:42.612 { 00:16:42.612 "name": "BaseBdev3", 00:16:42.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.612 "is_configured": false, 00:16:42.612 "data_offset": 0, 00:16:42.612 "data_size": 0 00:16:42.612 }, 00:16:42.612 { 00:16:42.612 "name": "BaseBdev4", 00:16:42.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.612 "is_configured": false, 00:16:42.612 "data_offset": 0, 00:16:42.612 "data_size": 0 00:16:42.612 } 00:16:42.612 ] 00:16:42.612 }' 00:16:42.612 18:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.612 18:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.181 [2024-12-06 18:13:55.147802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:43.181 [2024-12-06 18:13:55.147919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.181 [2024-12-06 18:13:55.159823] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.181 [2024-12-06 18:13:55.159956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.181 [2024-12-06 18:13:55.159990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.181 [2024-12-06 18:13:55.160018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.181 [2024-12-06 18:13:55.160041] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:43.181 [2024-12-06 18:13:55.160078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:43.181 [2024-12-06 18:13:55.160113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:43.181 [2024-12-06 18:13:55.160141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.181 [2024-12-06 18:13:55.214694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.181 BaseBdev1 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.181 
18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.181 [ 00:16:43.181 { 00:16:43.181 "name": "BaseBdev1", 00:16:43.181 "aliases": [ 00:16:43.181 "c6f08772-f972-44ea-be24-f558371fb57e" 00:16:43.181 ], 00:16:43.181 "product_name": "Malloc disk", 00:16:43.181 "block_size": 512, 00:16:43.181 "num_blocks": 65536, 00:16:43.181 "uuid": "c6f08772-f972-44ea-be24-f558371fb57e", 00:16:43.181 "assigned_rate_limits": { 00:16:43.181 "rw_ios_per_sec": 0, 00:16:43.181 "rw_mbytes_per_sec": 0, 00:16:43.181 "r_mbytes_per_sec": 0, 00:16:43.181 "w_mbytes_per_sec": 0 00:16:43.181 }, 00:16:43.181 "claimed": true, 00:16:43.181 "claim_type": "exclusive_write", 00:16:43.181 "zoned": false, 00:16:43.181 "supported_io_types": { 00:16:43.181 "read": true, 00:16:43.181 "write": true, 00:16:43.181 "unmap": true, 00:16:43.181 "flush": true, 00:16:43.181 "reset": true, 00:16:43.181 "nvme_admin": false, 00:16:43.181 "nvme_io": false, 00:16:43.181 "nvme_io_md": false, 00:16:43.181 "write_zeroes": true, 00:16:43.181 "zcopy": true, 00:16:43.181 "get_zone_info": false, 00:16:43.181 "zone_management": false, 00:16:43.181 "zone_append": false, 00:16:43.181 "compare": false, 00:16:43.181 "compare_and_write": false, 00:16:43.181 "abort": true, 00:16:43.181 "seek_hole": false, 00:16:43.181 "seek_data": false, 00:16:43.181 "copy": true, 00:16:43.181 "nvme_iov_md": false 00:16:43.181 }, 00:16:43.181 "memory_domains": [ 00:16:43.181 { 00:16:43.181 "dma_device_id": "system", 00:16:43.181 "dma_device_type": 1 00:16:43.181 }, 00:16:43.181 { 00:16:43.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.181 "dma_device_type": 2 00:16:43.181 } 00:16:43.181 ], 00:16:43.181 "driver_specific": {} 00:16:43.181 } 
00:16:43.181 ] 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.181 "name": "Existed_Raid", 00:16:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.181 "strip_size_kb": 64, 00:16:43.181 "state": "configuring", 00:16:43.181 "raid_level": "raid5f", 00:16:43.181 "superblock": false, 00:16:43.181 "num_base_bdevs": 4, 00:16:43.181 "num_base_bdevs_discovered": 1, 00:16:43.181 "num_base_bdevs_operational": 4, 00:16:43.181 "base_bdevs_list": [ 00:16:43.181 { 00:16:43.181 "name": "BaseBdev1", 00:16:43.181 "uuid": "c6f08772-f972-44ea-be24-f558371fb57e", 00:16:43.181 "is_configured": true, 00:16:43.181 "data_offset": 0, 00:16:43.181 "data_size": 65536 00:16:43.181 }, 00:16:43.181 { 00:16:43.181 "name": "BaseBdev2", 00:16:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.181 "is_configured": false, 00:16:43.181 "data_offset": 0, 00:16:43.181 "data_size": 0 00:16:43.181 }, 00:16:43.181 { 00:16:43.181 "name": "BaseBdev3", 00:16:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.181 "is_configured": false, 00:16:43.181 "data_offset": 0, 00:16:43.181 "data_size": 0 00:16:43.181 }, 00:16:43.181 { 00:16:43.181 "name": "BaseBdev4", 00:16:43.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.181 "is_configured": false, 00:16:43.181 "data_offset": 0, 00:16:43.181 "data_size": 0 00:16:43.181 } 00:16:43.181 ] 00:16:43.181 }' 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.181 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.750 
[2024-12-06 18:13:55.721986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:43.750 [2024-12-06 18:13:55.722175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.750 [2024-12-06 18:13:55.734058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.750 [2024-12-06 18:13:55.736415] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.750 [2024-12-06 18:13:55.736558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.750 [2024-12-06 18:13:55.736598] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:43.750 [2024-12-06 18:13:55.736638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:43.750 [2024-12-06 18:13:55.736670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:43.750 [2024-12-06 18:13:55.736698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.750 "name": "Existed_Raid", 00:16:43.750 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:43.750 "strip_size_kb": 64, 00:16:43.750 "state": "configuring", 00:16:43.750 "raid_level": "raid5f", 00:16:43.750 "superblock": false, 00:16:43.750 "num_base_bdevs": 4, 00:16:43.750 "num_base_bdevs_discovered": 1, 00:16:43.750 "num_base_bdevs_operational": 4, 00:16:43.750 "base_bdevs_list": [ 00:16:43.750 { 00:16:43.750 "name": "BaseBdev1", 00:16:43.750 "uuid": "c6f08772-f972-44ea-be24-f558371fb57e", 00:16:43.750 "is_configured": true, 00:16:43.750 "data_offset": 0, 00:16:43.750 "data_size": 65536 00:16:43.750 }, 00:16:43.750 { 00:16:43.750 "name": "BaseBdev2", 00:16:43.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.750 "is_configured": false, 00:16:43.750 "data_offset": 0, 00:16:43.750 "data_size": 0 00:16:43.750 }, 00:16:43.750 { 00:16:43.750 "name": "BaseBdev3", 00:16:43.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.750 "is_configured": false, 00:16:43.750 "data_offset": 0, 00:16:43.750 "data_size": 0 00:16:43.750 }, 00:16:43.750 { 00:16:43.750 "name": "BaseBdev4", 00:16:43.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.750 "is_configured": false, 00:16:43.750 "data_offset": 0, 00:16:43.750 "data_size": 0 00:16:43.750 } 00:16:43.750 ] 00:16:43.750 }' 00:16:43.750 18:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.751 18:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.319 [2024-12-06 18:13:56.238547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.319 BaseBdev2 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.319 [ 00:16:44.319 { 00:16:44.319 "name": "BaseBdev2", 00:16:44.319 "aliases": [ 00:16:44.319 "bde390ab-5ae4-45f1-99db-64fb7f4d43d9" 00:16:44.319 ], 00:16:44.319 "product_name": "Malloc disk", 00:16:44.319 "block_size": 512, 00:16:44.319 "num_blocks": 65536, 00:16:44.319 "uuid": "bde390ab-5ae4-45f1-99db-64fb7f4d43d9", 00:16:44.319 "assigned_rate_limits": { 00:16:44.319 "rw_ios_per_sec": 0, 00:16:44.319 "rw_mbytes_per_sec": 0, 00:16:44.319 
"r_mbytes_per_sec": 0, 00:16:44.319 "w_mbytes_per_sec": 0 00:16:44.319 }, 00:16:44.319 "claimed": true, 00:16:44.319 "claim_type": "exclusive_write", 00:16:44.319 "zoned": false, 00:16:44.319 "supported_io_types": { 00:16:44.319 "read": true, 00:16:44.319 "write": true, 00:16:44.319 "unmap": true, 00:16:44.319 "flush": true, 00:16:44.319 "reset": true, 00:16:44.319 "nvme_admin": false, 00:16:44.319 "nvme_io": false, 00:16:44.319 "nvme_io_md": false, 00:16:44.319 "write_zeroes": true, 00:16:44.319 "zcopy": true, 00:16:44.319 "get_zone_info": false, 00:16:44.319 "zone_management": false, 00:16:44.319 "zone_append": false, 00:16:44.319 "compare": false, 00:16:44.319 "compare_and_write": false, 00:16:44.319 "abort": true, 00:16:44.319 "seek_hole": false, 00:16:44.319 "seek_data": false, 00:16:44.319 "copy": true, 00:16:44.319 "nvme_iov_md": false 00:16:44.319 }, 00:16:44.319 "memory_domains": [ 00:16:44.319 { 00:16:44.319 "dma_device_id": "system", 00:16:44.319 "dma_device_type": 1 00:16:44.319 }, 00:16:44.319 { 00:16:44.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.319 "dma_device_type": 2 00:16:44.319 } 00:16:44.319 ], 00:16:44.319 "driver_specific": {} 00:16:44.319 } 00:16:44.319 ] 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.319 "name": "Existed_Raid", 00:16:44.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.319 "strip_size_kb": 64, 00:16:44.319 "state": "configuring", 00:16:44.319 "raid_level": "raid5f", 00:16:44.319 "superblock": false, 00:16:44.319 "num_base_bdevs": 4, 00:16:44.319 "num_base_bdevs_discovered": 2, 00:16:44.319 "num_base_bdevs_operational": 4, 00:16:44.319 "base_bdevs_list": [ 00:16:44.319 { 00:16:44.319 "name": "BaseBdev1", 00:16:44.319 "uuid": 
"c6f08772-f972-44ea-be24-f558371fb57e", 00:16:44.319 "is_configured": true, 00:16:44.319 "data_offset": 0, 00:16:44.319 "data_size": 65536 00:16:44.319 }, 00:16:44.319 { 00:16:44.319 "name": "BaseBdev2", 00:16:44.319 "uuid": "bde390ab-5ae4-45f1-99db-64fb7f4d43d9", 00:16:44.319 "is_configured": true, 00:16:44.319 "data_offset": 0, 00:16:44.319 "data_size": 65536 00:16:44.319 }, 00:16:44.319 { 00:16:44.319 "name": "BaseBdev3", 00:16:44.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.319 "is_configured": false, 00:16:44.319 "data_offset": 0, 00:16:44.319 "data_size": 0 00:16:44.319 }, 00:16:44.319 { 00:16:44.319 "name": "BaseBdev4", 00:16:44.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.319 "is_configured": false, 00:16:44.319 "data_offset": 0, 00:16:44.319 "data_size": 0 00:16:44.319 } 00:16:44.319 ] 00:16:44.319 }' 00:16:44.319 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.320 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.888 [2024-12-06 18:13:56.821883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.888 BaseBdev3 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.888 [ 00:16:44.888 { 00:16:44.888 "name": "BaseBdev3", 00:16:44.888 "aliases": [ 00:16:44.888 "1093c1a1-690f-463d-9b26-ce1f0fb76daf" 00:16:44.888 ], 00:16:44.888 "product_name": "Malloc disk", 00:16:44.888 "block_size": 512, 00:16:44.888 "num_blocks": 65536, 00:16:44.888 "uuid": "1093c1a1-690f-463d-9b26-ce1f0fb76daf", 00:16:44.888 "assigned_rate_limits": { 00:16:44.888 "rw_ios_per_sec": 0, 00:16:44.888 "rw_mbytes_per_sec": 0, 00:16:44.888 "r_mbytes_per_sec": 0, 00:16:44.888 "w_mbytes_per_sec": 0 00:16:44.888 }, 00:16:44.888 "claimed": true, 00:16:44.888 "claim_type": "exclusive_write", 00:16:44.888 "zoned": false, 00:16:44.888 "supported_io_types": { 00:16:44.888 "read": true, 00:16:44.888 "write": true, 00:16:44.888 "unmap": true, 00:16:44.888 "flush": true, 00:16:44.888 "reset": true, 00:16:44.888 "nvme_admin": false, 
00:16:44.888 "nvme_io": false, 00:16:44.888 "nvme_io_md": false, 00:16:44.888 "write_zeroes": true, 00:16:44.888 "zcopy": true, 00:16:44.888 "get_zone_info": false, 00:16:44.888 "zone_management": false, 00:16:44.888 "zone_append": false, 00:16:44.888 "compare": false, 00:16:44.888 "compare_and_write": false, 00:16:44.888 "abort": true, 00:16:44.888 "seek_hole": false, 00:16:44.888 "seek_data": false, 00:16:44.888 "copy": true, 00:16:44.888 "nvme_iov_md": false 00:16:44.888 }, 00:16:44.888 "memory_domains": [ 00:16:44.888 { 00:16:44.888 "dma_device_id": "system", 00:16:44.888 "dma_device_type": 1 00:16:44.888 }, 00:16:44.888 { 00:16:44.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.888 "dma_device_type": 2 00:16:44.888 } 00:16:44.888 ], 00:16:44.888 "driver_specific": {} 00:16:44.888 } 00:16:44.888 ] 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:44.888 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.889 "name": "Existed_Raid", 00:16:44.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.889 "strip_size_kb": 64, 00:16:44.889 "state": "configuring", 00:16:44.889 "raid_level": "raid5f", 00:16:44.889 "superblock": false, 00:16:44.889 "num_base_bdevs": 4, 00:16:44.889 "num_base_bdevs_discovered": 3, 00:16:44.889 "num_base_bdevs_operational": 4, 00:16:44.889 "base_bdevs_list": [ 00:16:44.889 { 00:16:44.889 "name": "BaseBdev1", 00:16:44.889 "uuid": "c6f08772-f972-44ea-be24-f558371fb57e", 00:16:44.889 "is_configured": true, 00:16:44.889 "data_offset": 0, 00:16:44.889 "data_size": 65536 00:16:44.889 }, 00:16:44.889 { 00:16:44.889 "name": "BaseBdev2", 00:16:44.889 "uuid": "bde390ab-5ae4-45f1-99db-64fb7f4d43d9", 00:16:44.889 "is_configured": true, 00:16:44.889 "data_offset": 0, 00:16:44.889 "data_size": 65536 00:16:44.889 }, 00:16:44.889 { 
00:16:44.889 "name": "BaseBdev3", 00:16:44.889 "uuid": "1093c1a1-690f-463d-9b26-ce1f0fb76daf", 00:16:44.889 "is_configured": true, 00:16:44.889 "data_offset": 0, 00:16:44.889 "data_size": 65536 00:16:44.889 }, 00:16:44.889 { 00:16:44.889 "name": "BaseBdev4", 00:16:44.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.889 "is_configured": false, 00:16:44.889 "data_offset": 0, 00:16:44.889 "data_size": 0 00:16:44.889 } 00:16:44.889 ] 00:16:44.889 }' 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.889 18:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.147 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:45.147 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.147 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.406 [2024-12-06 18:13:57.337627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:45.406 [2024-12-06 18:13:57.337723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:45.406 [2024-12-06 18:13:57.337734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:45.406 [2024-12-06 18:13:57.338031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:45.406 [2024-12-06 18:13:57.347477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:45.406 [2024-12-06 18:13:57.347633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:45.406 [2024-12-06 18:13:57.348092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.406 BaseBdev4 00:16:45.406 18:13:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.406 [ 00:16:45.406 { 00:16:45.406 "name": "BaseBdev4", 00:16:45.406 "aliases": [ 00:16:45.406 "706c92d0-a6f0-4be1-9c32-ba4d8a678c73" 00:16:45.406 ], 00:16:45.406 "product_name": "Malloc disk", 00:16:45.406 "block_size": 512, 00:16:45.406 "num_blocks": 65536, 00:16:45.406 "uuid": "706c92d0-a6f0-4be1-9c32-ba4d8a678c73", 00:16:45.406 "assigned_rate_limits": { 00:16:45.406 "rw_ios_per_sec": 0, 00:16:45.406 
"rw_mbytes_per_sec": 0, 00:16:45.406 "r_mbytes_per_sec": 0, 00:16:45.406 "w_mbytes_per_sec": 0 00:16:45.406 }, 00:16:45.406 "claimed": true, 00:16:45.406 "claim_type": "exclusive_write", 00:16:45.406 "zoned": false, 00:16:45.406 "supported_io_types": { 00:16:45.406 "read": true, 00:16:45.406 "write": true, 00:16:45.406 "unmap": true, 00:16:45.406 "flush": true, 00:16:45.406 "reset": true, 00:16:45.406 "nvme_admin": false, 00:16:45.406 "nvme_io": false, 00:16:45.406 "nvme_io_md": false, 00:16:45.406 "write_zeroes": true, 00:16:45.406 "zcopy": true, 00:16:45.406 "get_zone_info": false, 00:16:45.406 "zone_management": false, 00:16:45.406 "zone_append": false, 00:16:45.406 "compare": false, 00:16:45.406 "compare_and_write": false, 00:16:45.406 "abort": true, 00:16:45.406 "seek_hole": false, 00:16:45.406 "seek_data": false, 00:16:45.406 "copy": true, 00:16:45.406 "nvme_iov_md": false 00:16:45.406 }, 00:16:45.406 "memory_domains": [ 00:16:45.406 { 00:16:45.406 "dma_device_id": "system", 00:16:45.406 "dma_device_type": 1 00:16:45.406 }, 00:16:45.406 { 00:16:45.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.406 "dma_device_type": 2 00:16:45.406 } 00:16:45.406 ], 00:16:45.406 "driver_specific": {} 00:16:45.406 } 00:16:45.406 ] 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.406 18:13:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.406 "name": "Existed_Raid", 00:16:45.406 "uuid": "22c327ca-8e11-46b2-a24c-a81d8258742e", 00:16:45.406 "strip_size_kb": 64, 00:16:45.406 "state": "online", 00:16:45.406 "raid_level": "raid5f", 00:16:45.406 "superblock": false, 00:16:45.406 "num_base_bdevs": 4, 00:16:45.406 "num_base_bdevs_discovered": 4, 00:16:45.406 "num_base_bdevs_operational": 4, 00:16:45.406 "base_bdevs_list": [ 00:16:45.406 { 00:16:45.406 "name": 
"BaseBdev1", 00:16:45.406 "uuid": "c6f08772-f972-44ea-be24-f558371fb57e", 00:16:45.406 "is_configured": true, 00:16:45.406 "data_offset": 0, 00:16:45.406 "data_size": 65536 00:16:45.406 }, 00:16:45.406 { 00:16:45.406 "name": "BaseBdev2", 00:16:45.406 "uuid": "bde390ab-5ae4-45f1-99db-64fb7f4d43d9", 00:16:45.406 "is_configured": true, 00:16:45.406 "data_offset": 0, 00:16:45.406 "data_size": 65536 00:16:45.406 }, 00:16:45.406 { 00:16:45.406 "name": "BaseBdev3", 00:16:45.406 "uuid": "1093c1a1-690f-463d-9b26-ce1f0fb76daf", 00:16:45.406 "is_configured": true, 00:16:45.406 "data_offset": 0, 00:16:45.406 "data_size": 65536 00:16:45.406 }, 00:16:45.406 { 00:16:45.406 "name": "BaseBdev4", 00:16:45.406 "uuid": "706c92d0-a6f0-4be1-9c32-ba4d8a678c73", 00:16:45.406 "is_configured": true, 00:16:45.406 "data_offset": 0, 00:16:45.406 "data_size": 65536 00:16:45.406 } 00:16:45.406 ] 00:16:45.406 }' 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.406 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:45.975 [2024-12-06 18:13:57.877527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:45.975 "name": "Existed_Raid", 00:16:45.975 "aliases": [ 00:16:45.975 "22c327ca-8e11-46b2-a24c-a81d8258742e" 00:16:45.975 ], 00:16:45.975 "product_name": "Raid Volume", 00:16:45.975 "block_size": 512, 00:16:45.975 "num_blocks": 196608, 00:16:45.975 "uuid": "22c327ca-8e11-46b2-a24c-a81d8258742e", 00:16:45.975 "assigned_rate_limits": { 00:16:45.975 "rw_ios_per_sec": 0, 00:16:45.975 "rw_mbytes_per_sec": 0, 00:16:45.975 "r_mbytes_per_sec": 0, 00:16:45.975 "w_mbytes_per_sec": 0 00:16:45.975 }, 00:16:45.975 "claimed": false, 00:16:45.975 "zoned": false, 00:16:45.975 "supported_io_types": { 00:16:45.975 "read": true, 00:16:45.975 "write": true, 00:16:45.975 "unmap": false, 00:16:45.975 "flush": false, 00:16:45.975 "reset": true, 00:16:45.975 "nvme_admin": false, 00:16:45.975 "nvme_io": false, 00:16:45.975 "nvme_io_md": false, 00:16:45.975 "write_zeroes": true, 00:16:45.975 "zcopy": false, 00:16:45.975 "get_zone_info": false, 00:16:45.975 "zone_management": false, 00:16:45.975 "zone_append": false, 00:16:45.975 "compare": false, 00:16:45.975 "compare_and_write": false, 00:16:45.975 "abort": false, 00:16:45.975 "seek_hole": false, 00:16:45.975 "seek_data": false, 00:16:45.975 "copy": false, 00:16:45.975 "nvme_iov_md": false 00:16:45.975 }, 00:16:45.975 "driver_specific": { 00:16:45.975 "raid": { 00:16:45.975 "uuid": "22c327ca-8e11-46b2-a24c-a81d8258742e", 00:16:45.975 "strip_size_kb": 64, 
00:16:45.975 "state": "online", 00:16:45.975 "raid_level": "raid5f", 00:16:45.975 "superblock": false, 00:16:45.975 "num_base_bdevs": 4, 00:16:45.975 "num_base_bdevs_discovered": 4, 00:16:45.975 "num_base_bdevs_operational": 4, 00:16:45.975 "base_bdevs_list": [ 00:16:45.975 { 00:16:45.975 "name": "BaseBdev1", 00:16:45.975 "uuid": "c6f08772-f972-44ea-be24-f558371fb57e", 00:16:45.975 "is_configured": true, 00:16:45.975 "data_offset": 0, 00:16:45.975 "data_size": 65536 00:16:45.975 }, 00:16:45.975 { 00:16:45.975 "name": "BaseBdev2", 00:16:45.975 "uuid": "bde390ab-5ae4-45f1-99db-64fb7f4d43d9", 00:16:45.975 "is_configured": true, 00:16:45.975 "data_offset": 0, 00:16:45.975 "data_size": 65536 00:16:45.975 }, 00:16:45.975 { 00:16:45.975 "name": "BaseBdev3", 00:16:45.975 "uuid": "1093c1a1-690f-463d-9b26-ce1f0fb76daf", 00:16:45.975 "is_configured": true, 00:16:45.975 "data_offset": 0, 00:16:45.975 "data_size": 65536 00:16:45.975 }, 00:16:45.975 { 00:16:45.975 "name": "BaseBdev4", 00:16:45.975 "uuid": "706c92d0-a6f0-4be1-9c32-ba4d8a678c73", 00:16:45.975 "is_configured": true, 00:16:45.975 "data_offset": 0, 00:16:45.975 "data_size": 65536 00:16:45.975 } 00:16:45.975 ] 00:16:45.975 } 00:16:45.975 } 00:16:45.975 }' 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:45.975 BaseBdev2 00:16:45.975 BaseBdev3 00:16:45.975 BaseBdev4' 00:16:45.975 18:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.975 18:13:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.975 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:46.235 [2024-12-06 18:13:58.204827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.235 18:13:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.235 "name": "Existed_Raid", 00:16:46.235 "uuid": "22c327ca-8e11-46b2-a24c-a81d8258742e", 00:16:46.235 "strip_size_kb": 64, 00:16:46.235 "state": "online", 00:16:46.235 "raid_level": "raid5f", 00:16:46.235 "superblock": false, 00:16:46.235 "num_base_bdevs": 4, 00:16:46.235 "num_base_bdevs_discovered": 3, 00:16:46.235 "num_base_bdevs_operational": 3, 00:16:46.235 "base_bdevs_list": [ 00:16:46.235 { 00:16:46.235 "name": null, 00:16:46.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.235 "is_configured": false, 00:16:46.235 "data_offset": 0, 00:16:46.235 "data_size": 65536 00:16:46.235 }, 00:16:46.235 { 00:16:46.235 "name": "BaseBdev2", 00:16:46.235 "uuid": "bde390ab-5ae4-45f1-99db-64fb7f4d43d9", 00:16:46.235 "is_configured": true, 00:16:46.235 "data_offset": 0, 00:16:46.235 "data_size": 65536 00:16:46.235 }, 00:16:46.235 { 00:16:46.235 "name": "BaseBdev3", 00:16:46.235 "uuid": "1093c1a1-690f-463d-9b26-ce1f0fb76daf", 00:16:46.235 "is_configured": true, 00:16:46.235 "data_offset": 0, 00:16:46.235 "data_size": 65536 00:16:46.235 }, 00:16:46.235 { 00:16:46.235 "name": "BaseBdev4", 00:16:46.235 "uuid": "706c92d0-a6f0-4be1-9c32-ba4d8a678c73", 00:16:46.235 "is_configured": true, 00:16:46.235 "data_offset": 0, 00:16:46.235 "data_size": 65536 00:16:46.235 } 00:16:46.235 ] 00:16:46.235 }' 00:16:46.235 
18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.235 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.803 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.804 [2024-12-06 18:13:58.810674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:46.804 [2024-12-06 18:13:58.810900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.804 [2024-12-06 18:13:58.920573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.804 18:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.064 [2024-12-06 18:13:58.972534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.064 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.064 [2024-12-06 18:13:59.138171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:47.064 [2024-12-06 18:13:59.138332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.325 18:13:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.325 BaseBdev2 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.325 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.325 [ 00:16:47.325 { 00:16:47.325 "name": "BaseBdev2", 00:16:47.325 "aliases": [ 00:16:47.325 "35749de2-a398-4ac6-885f-4cf7c18bbe79" 00:16:47.325 ], 00:16:47.325 "product_name": "Malloc disk", 00:16:47.325 "block_size": 512, 00:16:47.325 "num_blocks": 65536, 00:16:47.325 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:47.325 "assigned_rate_limits": { 00:16:47.325 "rw_ios_per_sec": 0, 00:16:47.325 "rw_mbytes_per_sec": 0, 00:16:47.325 "r_mbytes_per_sec": 0, 00:16:47.325 "w_mbytes_per_sec": 0 00:16:47.325 }, 00:16:47.325 "claimed": false, 00:16:47.325 "zoned": false, 00:16:47.325 "supported_io_types": { 00:16:47.326 "read": true, 00:16:47.326 "write": true, 00:16:47.326 "unmap": true, 00:16:47.326 "flush": true, 00:16:47.326 "reset": true, 00:16:47.326 "nvme_admin": false, 00:16:47.326 "nvme_io": false, 00:16:47.326 "nvme_io_md": false, 00:16:47.326 "write_zeroes": true, 00:16:47.326 "zcopy": true, 00:16:47.326 "get_zone_info": false, 00:16:47.326 "zone_management": false, 00:16:47.326 "zone_append": false, 00:16:47.326 "compare": false, 00:16:47.326 "compare_and_write": false, 00:16:47.326 "abort": true, 00:16:47.326 "seek_hole": false, 00:16:47.326 "seek_data": false, 00:16:47.326 "copy": true, 00:16:47.326 "nvme_iov_md": false 00:16:47.326 }, 00:16:47.326 "memory_domains": [ 00:16:47.326 { 00:16:47.326 "dma_device_id": "system", 00:16:47.326 "dma_device_type": 1 00:16:47.326 }, 
00:16:47.326 { 00:16:47.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.326 "dma_device_type": 2 00:16:47.326 } 00:16:47.326 ], 00:16:47.326 "driver_specific": {} 00:16:47.326 } 00:16:47.326 ] 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.326 BaseBdev3 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.326 [ 00:16:47.326 { 00:16:47.326 "name": "BaseBdev3", 00:16:47.326 "aliases": [ 00:16:47.326 "9b256565-6c35-4f25-ac99-06336ad859c0" 00:16:47.326 ], 00:16:47.326 "product_name": "Malloc disk", 00:16:47.326 "block_size": 512, 00:16:47.326 "num_blocks": 65536, 00:16:47.326 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:47.326 "assigned_rate_limits": { 00:16:47.326 "rw_ios_per_sec": 0, 00:16:47.326 "rw_mbytes_per_sec": 0, 00:16:47.326 "r_mbytes_per_sec": 0, 00:16:47.326 "w_mbytes_per_sec": 0 00:16:47.326 }, 00:16:47.326 "claimed": false, 00:16:47.326 "zoned": false, 00:16:47.326 "supported_io_types": { 00:16:47.326 "read": true, 00:16:47.326 "write": true, 00:16:47.326 "unmap": true, 00:16:47.326 "flush": true, 00:16:47.326 "reset": true, 00:16:47.326 "nvme_admin": false, 00:16:47.326 "nvme_io": false, 00:16:47.326 "nvme_io_md": false, 00:16:47.326 "write_zeroes": true, 00:16:47.326 "zcopy": true, 00:16:47.326 "get_zone_info": false, 00:16:47.326 "zone_management": false, 00:16:47.326 "zone_append": false, 00:16:47.326 "compare": false, 00:16:47.326 "compare_and_write": false, 00:16:47.326 "abort": true, 00:16:47.326 "seek_hole": false, 00:16:47.326 "seek_data": false, 00:16:47.326 "copy": true, 00:16:47.326 "nvme_iov_md": false 00:16:47.326 }, 00:16:47.326 "memory_domains": [ 00:16:47.326 { 00:16:47.326 "dma_device_id": "system", 00:16:47.326 
"dma_device_type": 1 00:16:47.326 }, 00:16:47.326 { 00:16:47.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.326 "dma_device_type": 2 00:16:47.326 } 00:16:47.326 ], 00:16:47.326 "driver_specific": {} 00:16:47.326 } 00:16:47.326 ] 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.326 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 BaseBdev4 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.586 18:13:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 [ 00:16:47.586 { 00:16:47.586 "name": "BaseBdev4", 00:16:47.586 "aliases": [ 00:16:47.586 "20961e74-0c4c-4df8-802a-fa55a1228f4b" 00:16:47.586 ], 00:16:47.586 "product_name": "Malloc disk", 00:16:47.586 "block_size": 512, 00:16:47.586 "num_blocks": 65536, 00:16:47.586 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:47.586 "assigned_rate_limits": { 00:16:47.586 "rw_ios_per_sec": 0, 00:16:47.586 "rw_mbytes_per_sec": 0, 00:16:47.586 "r_mbytes_per_sec": 0, 00:16:47.586 "w_mbytes_per_sec": 0 00:16:47.586 }, 00:16:47.586 "claimed": false, 00:16:47.586 "zoned": false, 00:16:47.586 "supported_io_types": { 00:16:47.586 "read": true, 00:16:47.586 "write": true, 00:16:47.586 "unmap": true, 00:16:47.586 "flush": true, 00:16:47.586 "reset": true, 00:16:47.586 "nvme_admin": false, 00:16:47.586 "nvme_io": false, 00:16:47.586 "nvme_io_md": false, 00:16:47.586 "write_zeroes": true, 00:16:47.586 "zcopy": true, 00:16:47.586 "get_zone_info": false, 00:16:47.586 "zone_management": false, 00:16:47.586 "zone_append": false, 00:16:47.586 "compare": false, 00:16:47.586 "compare_and_write": false, 00:16:47.586 "abort": true, 00:16:47.586 "seek_hole": false, 00:16:47.586 "seek_data": false, 00:16:47.586 "copy": true, 00:16:47.586 "nvme_iov_md": false 00:16:47.586 }, 00:16:47.586 "memory_domains": [ 00:16:47.586 { 00:16:47.586 
"dma_device_id": "system", 00:16:47.586 "dma_device_type": 1 00:16:47.586 }, 00:16:47.586 { 00:16:47.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.586 "dma_device_type": 2 00:16:47.586 } 00:16:47.586 ], 00:16:47.586 "driver_specific": {} 00:16:47.586 } 00:16:47.586 ] 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.586 [2024-12-06 18:13:59.572827] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.586 [2024-12-06 18:13:59.572988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.586 [2024-12-06 18:13:59.573082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.586 [2024-12-06 18:13:59.575251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.586 [2024-12-06 18:13:59.575368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.586 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.587 "name": "Existed_Raid", 00:16:47.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.587 "strip_size_kb": 64, 00:16:47.587 "state": "configuring", 00:16:47.587 "raid_level": "raid5f", 00:16:47.587 "superblock": false, 00:16:47.587 
"num_base_bdevs": 4, 00:16:47.587 "num_base_bdevs_discovered": 3, 00:16:47.587 "num_base_bdevs_operational": 4, 00:16:47.587 "base_bdevs_list": [ 00:16:47.587 { 00:16:47.587 "name": "BaseBdev1", 00:16:47.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.587 "is_configured": false, 00:16:47.587 "data_offset": 0, 00:16:47.587 "data_size": 0 00:16:47.587 }, 00:16:47.587 { 00:16:47.587 "name": "BaseBdev2", 00:16:47.587 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:47.587 "is_configured": true, 00:16:47.587 "data_offset": 0, 00:16:47.587 "data_size": 65536 00:16:47.587 }, 00:16:47.587 { 00:16:47.587 "name": "BaseBdev3", 00:16:47.587 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:47.587 "is_configured": true, 00:16:47.587 "data_offset": 0, 00:16:47.587 "data_size": 65536 00:16:47.587 }, 00:16:47.587 { 00:16:47.587 "name": "BaseBdev4", 00:16:47.587 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:47.587 "is_configured": true, 00:16:47.587 "data_offset": 0, 00:16:47.587 "data_size": 65536 00:16:47.587 } 00:16:47.587 ] 00:16:47.587 }' 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.587 18:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.158 [2024-12-06 18:14:00.032048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.158 "name": "Existed_Raid", 00:16:48.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.158 "strip_size_kb": 64, 00:16:48.158 "state": "configuring", 00:16:48.158 "raid_level": "raid5f", 00:16:48.158 "superblock": false, 00:16:48.158 "num_base_bdevs": 4, 
00:16:48.158 "num_base_bdevs_discovered": 2, 00:16:48.158 "num_base_bdevs_operational": 4, 00:16:48.158 "base_bdevs_list": [ 00:16:48.158 { 00:16:48.158 "name": "BaseBdev1", 00:16:48.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.158 "is_configured": false, 00:16:48.158 "data_offset": 0, 00:16:48.158 "data_size": 0 00:16:48.158 }, 00:16:48.158 { 00:16:48.158 "name": null, 00:16:48.158 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:48.158 "is_configured": false, 00:16:48.158 "data_offset": 0, 00:16:48.158 "data_size": 65536 00:16:48.158 }, 00:16:48.158 { 00:16:48.158 "name": "BaseBdev3", 00:16:48.158 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:48.158 "is_configured": true, 00:16:48.158 "data_offset": 0, 00:16:48.158 "data_size": 65536 00:16:48.158 }, 00:16:48.158 { 00:16:48.158 "name": "BaseBdev4", 00:16:48.158 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:48.158 "is_configured": true, 00:16:48.158 "data_offset": 0, 00:16:48.158 "data_size": 65536 00:16:48.158 } 00:16:48.158 ] 00:16:48.158 }' 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.158 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:48.418 18:14:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.418 [2024-12-06 18:14:00.561840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.418 BaseBdev1 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:48.418 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.418 18:14:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.677 [ 00:16:48.677 { 00:16:48.677 "name": "BaseBdev1", 00:16:48.677 "aliases": [ 00:16:48.677 "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6" 00:16:48.677 ], 00:16:48.677 "product_name": "Malloc disk", 00:16:48.677 "block_size": 512, 00:16:48.677 "num_blocks": 65536, 00:16:48.677 "uuid": "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6", 00:16:48.677 "assigned_rate_limits": { 00:16:48.677 "rw_ios_per_sec": 0, 00:16:48.677 "rw_mbytes_per_sec": 0, 00:16:48.677 "r_mbytes_per_sec": 0, 00:16:48.677 "w_mbytes_per_sec": 0 00:16:48.677 }, 00:16:48.677 "claimed": true, 00:16:48.677 "claim_type": "exclusive_write", 00:16:48.677 "zoned": false, 00:16:48.677 "supported_io_types": { 00:16:48.677 "read": true, 00:16:48.677 "write": true, 00:16:48.677 "unmap": true, 00:16:48.677 "flush": true, 00:16:48.677 "reset": true, 00:16:48.677 "nvme_admin": false, 00:16:48.677 "nvme_io": false, 00:16:48.677 "nvme_io_md": false, 00:16:48.677 "write_zeroes": true, 00:16:48.677 "zcopy": true, 00:16:48.677 "get_zone_info": false, 00:16:48.677 "zone_management": false, 00:16:48.677 "zone_append": false, 00:16:48.677 "compare": false, 00:16:48.677 "compare_and_write": false, 00:16:48.677 "abort": true, 00:16:48.677 "seek_hole": false, 00:16:48.677 "seek_data": false, 00:16:48.677 "copy": true, 00:16:48.677 "nvme_iov_md": false 00:16:48.677 }, 00:16:48.677 "memory_domains": [ 00:16:48.677 { 00:16:48.677 "dma_device_id": "system", 00:16:48.677 "dma_device_type": 1 00:16:48.677 }, 00:16:48.677 { 00:16:48.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.677 "dma_device_type": 2 00:16:48.677 } 00:16:48.677 ], 00:16:48.677 "driver_specific": {} 00:16:48.677 } 00:16:48.677 ] 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:48.677 18:14:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.677 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.677 "name": "Existed_Raid", 00:16:48.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.677 "strip_size_kb": 64, 00:16:48.677 "state": 
"configuring", 00:16:48.677 "raid_level": "raid5f", 00:16:48.677 "superblock": false, 00:16:48.677 "num_base_bdevs": 4, 00:16:48.677 "num_base_bdevs_discovered": 3, 00:16:48.677 "num_base_bdevs_operational": 4, 00:16:48.677 "base_bdevs_list": [ 00:16:48.677 { 00:16:48.677 "name": "BaseBdev1", 00:16:48.677 "uuid": "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6", 00:16:48.677 "is_configured": true, 00:16:48.677 "data_offset": 0, 00:16:48.677 "data_size": 65536 00:16:48.677 }, 00:16:48.677 { 00:16:48.677 "name": null, 00:16:48.677 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:48.677 "is_configured": false, 00:16:48.677 "data_offset": 0, 00:16:48.677 "data_size": 65536 00:16:48.677 }, 00:16:48.677 { 00:16:48.677 "name": "BaseBdev3", 00:16:48.677 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:48.677 "is_configured": true, 00:16:48.677 "data_offset": 0, 00:16:48.677 "data_size": 65536 00:16:48.677 }, 00:16:48.677 { 00:16:48.677 "name": "BaseBdev4", 00:16:48.677 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:48.677 "is_configured": true, 00:16:48.677 "data_offset": 0, 00:16:48.678 "data_size": 65536 00:16:48.678 } 00:16:48.678 ] 00:16:48.678 }' 00:16:48.678 18:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.678 18:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.943 18:14:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.943 [2024-12-06 18:14:01.101127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.943 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.202 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.202 18:14:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.202 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.202 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.202 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.202 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.202 "name": "Existed_Raid", 00:16:49.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.202 "strip_size_kb": 64, 00:16:49.202 "state": "configuring", 00:16:49.202 "raid_level": "raid5f", 00:16:49.202 "superblock": false, 00:16:49.202 "num_base_bdevs": 4, 00:16:49.202 "num_base_bdevs_discovered": 2, 00:16:49.202 "num_base_bdevs_operational": 4, 00:16:49.202 "base_bdevs_list": [ 00:16:49.202 { 00:16:49.202 "name": "BaseBdev1", 00:16:49.202 "uuid": "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6", 00:16:49.202 "is_configured": true, 00:16:49.202 "data_offset": 0, 00:16:49.202 "data_size": 65536 00:16:49.202 }, 00:16:49.202 { 00:16:49.202 "name": null, 00:16:49.202 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:49.202 "is_configured": false, 00:16:49.202 "data_offset": 0, 00:16:49.202 "data_size": 65536 00:16:49.202 }, 00:16:49.202 { 00:16:49.202 "name": null, 00:16:49.202 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:49.202 "is_configured": false, 00:16:49.202 "data_offset": 0, 00:16:49.202 "data_size": 65536 00:16:49.202 }, 00:16:49.202 { 00:16:49.202 "name": "BaseBdev4", 00:16:49.202 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:49.202 "is_configured": true, 00:16:49.202 "data_offset": 0, 00:16:49.202 "data_size": 65536 00:16:49.202 } 00:16:49.202 ] 00:16:49.202 }' 00:16:49.202 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.202 18:14:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.460 [2024-12-06 18:14:01.596263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.460 
18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.460 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.719 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.719 "name": "Existed_Raid", 00:16:49.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.719 "strip_size_kb": 64, 00:16:49.719 "state": "configuring", 00:16:49.719 "raid_level": "raid5f", 00:16:49.719 "superblock": false, 00:16:49.719 "num_base_bdevs": 4, 00:16:49.719 "num_base_bdevs_discovered": 3, 00:16:49.719 "num_base_bdevs_operational": 4, 00:16:49.719 "base_bdevs_list": [ 00:16:49.719 { 00:16:49.719 "name": "BaseBdev1", 00:16:49.719 "uuid": "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6", 00:16:49.719 "is_configured": true, 00:16:49.719 "data_offset": 0, 00:16:49.719 "data_size": 65536 00:16:49.719 }, 00:16:49.719 { 00:16:49.719 "name": null, 00:16:49.719 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:49.719 "is_configured": 
false, 00:16:49.719 "data_offset": 0, 00:16:49.719 "data_size": 65536 00:16:49.719 }, 00:16:49.719 { 00:16:49.719 "name": "BaseBdev3", 00:16:49.719 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:49.719 "is_configured": true, 00:16:49.719 "data_offset": 0, 00:16:49.719 "data_size": 65536 00:16:49.719 }, 00:16:49.719 { 00:16:49.719 "name": "BaseBdev4", 00:16:49.719 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:49.719 "is_configured": true, 00:16:49.719 "data_offset": 0, 00:16:49.719 "data_size": 65536 00:16:49.719 } 00:16:49.719 ] 00:16:49.719 }' 00:16:49.719 18:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.719 18:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.978 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:49.978 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.978 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.978 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.978 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.978 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:49.978 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:49.978 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.978 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.978 [2024-12-06 18:14:02.107477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.236 "name": "Existed_Raid", 00:16:50.236 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:50.236 "strip_size_kb": 64, 00:16:50.236 "state": "configuring", 00:16:50.236 "raid_level": "raid5f", 00:16:50.236 "superblock": false, 00:16:50.236 "num_base_bdevs": 4, 00:16:50.236 "num_base_bdevs_discovered": 2, 00:16:50.236 "num_base_bdevs_operational": 4, 00:16:50.236 "base_bdevs_list": [ 00:16:50.236 { 00:16:50.236 "name": null, 00:16:50.236 "uuid": "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6", 00:16:50.236 "is_configured": false, 00:16:50.236 "data_offset": 0, 00:16:50.236 "data_size": 65536 00:16:50.236 }, 00:16:50.236 { 00:16:50.236 "name": null, 00:16:50.236 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:50.236 "is_configured": false, 00:16:50.236 "data_offset": 0, 00:16:50.236 "data_size": 65536 00:16:50.236 }, 00:16:50.236 { 00:16:50.236 "name": "BaseBdev3", 00:16:50.236 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:50.236 "is_configured": true, 00:16:50.236 "data_offset": 0, 00:16:50.236 "data_size": 65536 00:16:50.236 }, 00:16:50.236 { 00:16:50.236 "name": "BaseBdev4", 00:16:50.236 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:50.236 "is_configured": true, 00:16:50.236 "data_offset": 0, 00:16:50.236 "data_size": 65536 00:16:50.236 } 00:16:50.236 ] 00:16:50.236 }' 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.236 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.495 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.495 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.495 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.495 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.753 [2024-12-06 18:14:02.707411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.753 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.754 "name": "Existed_Raid", 00:16:50.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.754 "strip_size_kb": 64, 00:16:50.754 "state": "configuring", 00:16:50.754 "raid_level": "raid5f", 00:16:50.754 "superblock": false, 00:16:50.754 "num_base_bdevs": 4, 00:16:50.754 "num_base_bdevs_discovered": 3, 00:16:50.754 "num_base_bdevs_operational": 4, 00:16:50.754 "base_bdevs_list": [ 00:16:50.754 { 00:16:50.754 "name": null, 00:16:50.754 "uuid": "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6", 00:16:50.754 "is_configured": false, 00:16:50.754 "data_offset": 0, 00:16:50.754 "data_size": 65536 00:16:50.754 }, 00:16:50.754 { 00:16:50.754 "name": "BaseBdev2", 00:16:50.754 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:50.754 "is_configured": true, 00:16:50.754 "data_offset": 0, 00:16:50.754 "data_size": 65536 00:16:50.754 }, 00:16:50.754 { 00:16:50.754 "name": "BaseBdev3", 00:16:50.754 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:50.754 "is_configured": true, 00:16:50.754 "data_offset": 0, 00:16:50.754 "data_size": 65536 00:16:50.754 }, 00:16:50.754 { 00:16:50.754 "name": "BaseBdev4", 00:16:50.754 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:50.754 "is_configured": true, 00:16:50.754 "data_offset": 0, 00:16:50.754 "data_size": 65536 00:16:50.754 } 00:16:50.754 ] 00:16:50.754 }' 00:16:50.754 18:14:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.754 18:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.012 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.012 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:51.012 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.012 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4e21b9ab-298e-4304-bfb3-2752a9e1c3c6 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.286 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.286 [2024-12-06 18:14:03.301017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:51.287 [2024-12-06 
18:14:03.301253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:51.287 [2024-12-06 18:14:03.301271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:51.287 [2024-12-06 18:14:03.301599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:51.287 [2024-12-06 18:14:03.310494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:51.287 [2024-12-06 18:14:03.310619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:51.287 [2024-12-06 18:14:03.311013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.287 NewBaseBdev 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.287 [ 00:16:51.287 { 00:16:51.287 "name": "NewBaseBdev", 00:16:51.287 "aliases": [ 00:16:51.287 "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6" 00:16:51.287 ], 00:16:51.287 "product_name": "Malloc disk", 00:16:51.287 "block_size": 512, 00:16:51.287 "num_blocks": 65536, 00:16:51.287 "uuid": "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6", 00:16:51.287 "assigned_rate_limits": { 00:16:51.287 "rw_ios_per_sec": 0, 00:16:51.287 "rw_mbytes_per_sec": 0, 00:16:51.287 "r_mbytes_per_sec": 0, 00:16:51.287 "w_mbytes_per_sec": 0 00:16:51.287 }, 00:16:51.287 "claimed": true, 00:16:51.287 "claim_type": "exclusive_write", 00:16:51.287 "zoned": false, 00:16:51.287 "supported_io_types": { 00:16:51.287 "read": true, 00:16:51.287 "write": true, 00:16:51.287 "unmap": true, 00:16:51.287 "flush": true, 00:16:51.287 "reset": true, 00:16:51.287 "nvme_admin": false, 00:16:51.287 "nvme_io": false, 00:16:51.287 "nvme_io_md": false, 00:16:51.287 "write_zeroes": true, 00:16:51.287 "zcopy": true, 00:16:51.287 "get_zone_info": false, 00:16:51.287 "zone_management": false, 00:16:51.287 "zone_append": false, 00:16:51.287 "compare": false, 00:16:51.287 "compare_and_write": false, 00:16:51.287 "abort": true, 00:16:51.287 "seek_hole": false, 00:16:51.287 "seek_data": false, 00:16:51.287 "copy": true, 00:16:51.287 "nvme_iov_md": false 00:16:51.287 }, 00:16:51.287 "memory_domains": [ 00:16:51.287 { 00:16:51.287 "dma_device_id": "system", 00:16:51.287 "dma_device_type": 1 00:16:51.287 }, 00:16:51.287 { 00:16:51.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.287 "dma_device_type": 2 00:16:51.287 } 
00:16:51.287 ], 00:16:51.287 "driver_specific": {} 00:16:51.287 } 00:16:51.287 ] 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.287 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.287 "name": "Existed_Raid", 00:16:51.287 "uuid": "1dd6f8f8-3317-46b3-88a4-723e8b2339a5", 00:16:51.287 "strip_size_kb": 64, 00:16:51.287 "state": "online", 00:16:51.287 "raid_level": "raid5f", 00:16:51.287 "superblock": false, 00:16:51.287 "num_base_bdevs": 4, 00:16:51.288 "num_base_bdevs_discovered": 4, 00:16:51.288 "num_base_bdevs_operational": 4, 00:16:51.288 "base_bdevs_list": [ 00:16:51.288 { 00:16:51.288 "name": "NewBaseBdev", 00:16:51.288 "uuid": "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6", 00:16:51.288 "is_configured": true, 00:16:51.288 "data_offset": 0, 00:16:51.288 "data_size": 65536 00:16:51.288 }, 00:16:51.288 { 00:16:51.288 "name": "BaseBdev2", 00:16:51.288 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:51.288 "is_configured": true, 00:16:51.288 "data_offset": 0, 00:16:51.288 "data_size": 65536 00:16:51.288 }, 00:16:51.288 { 00:16:51.288 "name": "BaseBdev3", 00:16:51.288 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:51.288 "is_configured": true, 00:16:51.288 "data_offset": 0, 00:16:51.288 "data_size": 65536 00:16:51.288 }, 00:16:51.288 { 00:16:51.288 "name": "BaseBdev4", 00:16:51.288 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:51.288 "is_configured": true, 00:16:51.288 "data_offset": 0, 00:16:51.288 "data_size": 65536 00:16:51.288 } 00:16:51.288 ] 00:16:51.288 }' 00:16:51.288 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.288 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.856 [2024-12-06 18:14:03.828539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.856 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:51.856 "name": "Existed_Raid", 00:16:51.856 "aliases": [ 00:16:51.856 "1dd6f8f8-3317-46b3-88a4-723e8b2339a5" 00:16:51.856 ], 00:16:51.856 "product_name": "Raid Volume", 00:16:51.856 "block_size": 512, 00:16:51.856 "num_blocks": 196608, 00:16:51.856 "uuid": "1dd6f8f8-3317-46b3-88a4-723e8b2339a5", 00:16:51.856 "assigned_rate_limits": { 00:16:51.857 "rw_ios_per_sec": 0, 00:16:51.857 "rw_mbytes_per_sec": 0, 00:16:51.857 "r_mbytes_per_sec": 0, 00:16:51.857 "w_mbytes_per_sec": 0 00:16:51.857 }, 00:16:51.857 "claimed": false, 00:16:51.857 "zoned": false, 00:16:51.857 "supported_io_types": { 00:16:51.857 "read": true, 00:16:51.857 "write": true, 00:16:51.857 "unmap": false, 00:16:51.857 "flush": false, 00:16:51.857 "reset": true, 00:16:51.857 "nvme_admin": false, 00:16:51.857 "nvme_io": false, 00:16:51.857 "nvme_io_md": 
false, 00:16:51.857 "write_zeroes": true, 00:16:51.857 "zcopy": false, 00:16:51.857 "get_zone_info": false, 00:16:51.857 "zone_management": false, 00:16:51.857 "zone_append": false, 00:16:51.857 "compare": false, 00:16:51.857 "compare_and_write": false, 00:16:51.857 "abort": false, 00:16:51.857 "seek_hole": false, 00:16:51.857 "seek_data": false, 00:16:51.857 "copy": false, 00:16:51.857 "nvme_iov_md": false 00:16:51.857 }, 00:16:51.857 "driver_specific": { 00:16:51.857 "raid": { 00:16:51.857 "uuid": "1dd6f8f8-3317-46b3-88a4-723e8b2339a5", 00:16:51.857 "strip_size_kb": 64, 00:16:51.857 "state": "online", 00:16:51.857 "raid_level": "raid5f", 00:16:51.857 "superblock": false, 00:16:51.857 "num_base_bdevs": 4, 00:16:51.857 "num_base_bdevs_discovered": 4, 00:16:51.857 "num_base_bdevs_operational": 4, 00:16:51.857 "base_bdevs_list": [ 00:16:51.857 { 00:16:51.857 "name": "NewBaseBdev", 00:16:51.857 "uuid": "4e21b9ab-298e-4304-bfb3-2752a9e1c3c6", 00:16:51.857 "is_configured": true, 00:16:51.857 "data_offset": 0, 00:16:51.857 "data_size": 65536 00:16:51.857 }, 00:16:51.857 { 00:16:51.857 "name": "BaseBdev2", 00:16:51.857 "uuid": "35749de2-a398-4ac6-885f-4cf7c18bbe79", 00:16:51.857 "is_configured": true, 00:16:51.857 "data_offset": 0, 00:16:51.857 "data_size": 65536 00:16:51.857 }, 00:16:51.857 { 00:16:51.857 "name": "BaseBdev3", 00:16:51.857 "uuid": "9b256565-6c35-4f25-ac99-06336ad859c0", 00:16:51.857 "is_configured": true, 00:16:51.857 "data_offset": 0, 00:16:51.857 "data_size": 65536 00:16:51.857 }, 00:16:51.857 { 00:16:51.857 "name": "BaseBdev4", 00:16:51.857 "uuid": "20961e74-0c4c-4df8-802a-fa55a1228f4b", 00:16:51.857 "is_configured": true, 00:16:51.857 "data_offset": 0, 00:16:51.857 "data_size": 65536 00:16:51.857 } 00:16:51.857 ] 00:16:51.857 } 00:16:51.857 } 00:16:51.857 }' 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.857 18:14:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:51.857 BaseBdev2 00:16:51.857 BaseBdev3 00:16:51.857 BaseBdev4' 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.857 18:14:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.857 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.857 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.857 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.857 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.857 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:51.857 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.857 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.857 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.116 18:14:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.116 [2024-12-06 18:14:04.115787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.116 [2024-12-06 18:14:04.115907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.116 [2024-12-06 18:14:04.116019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.116 [2024-12-06 18:14:04.116403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.116 [2024-12-06 18:14:04.116443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83338 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83338 ']' 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83338 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83338 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83338' 00:16:52.116 killing process with pid 83338 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83338 00:16:52.116 [2024-12-06 18:14:04.165594] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.116 18:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83338 00:16:52.682 [2024-12-06 18:14:04.625956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:54.068 00:16:54.068 real 0m12.130s 00:16:54.068 user 0m19.205s 00:16:54.068 sys 0m2.043s 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.068 ************************************ 00:16:54.068 END TEST raid5f_state_function_test 00:16:54.068 ************************************ 00:16:54.068 18:14:05 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:54.068 18:14:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:54.068 18:14:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.068 18:14:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.068 ************************************ 00:16:54.068 START TEST 
raid5f_state_function_test_sb 00:16:54.068 ************************************ 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:54.068 
18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:54.068 Process raid pid: 84015 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84015 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84015' 00:16:54.068 18:14:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84015 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84015 ']' 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.068 18:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.068 [2024-12-06 18:14:05.964774] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:16:54.068 [2024-12-06 18:14:05.964978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.068 [2024-12-06 18:14:06.142471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.331 [2024-12-06 18:14:06.256739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.331 [2024-12-06 18:14:06.469960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.331 [2024-12-06 18:14:06.470105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.897 18:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.897 18:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:54.897 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:54.897 18:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.897 18:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.897 [2024-12-06 18:14:06.843805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.897 [2024-12-06 18:14:06.843941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.897 [2024-12-06 18:14:06.843981] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.897 [2024-12-06 18:14:06.844011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.898 [2024-12-06 18:14:06.844022] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:54.898 [2024-12-06 18:14:06.844033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.898 [2024-12-06 18:14:06.844041] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:54.898 [2024-12-06 18:14:06.844052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.898 "name": "Existed_Raid", 00:16:54.898 "uuid": "5507dddc-33a9-4187-89d5-0975203c0b1d", 00:16:54.898 "strip_size_kb": 64, 00:16:54.898 "state": "configuring", 00:16:54.898 "raid_level": "raid5f", 00:16:54.898 "superblock": true, 00:16:54.898 "num_base_bdevs": 4, 00:16:54.898 "num_base_bdevs_discovered": 0, 00:16:54.898 "num_base_bdevs_operational": 4, 00:16:54.898 "base_bdevs_list": [ 00:16:54.898 { 00:16:54.898 "name": "BaseBdev1", 00:16:54.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.898 "is_configured": false, 00:16:54.898 "data_offset": 0, 00:16:54.898 "data_size": 0 00:16:54.898 }, 00:16:54.898 { 00:16:54.898 "name": "BaseBdev2", 00:16:54.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.898 "is_configured": false, 00:16:54.898 "data_offset": 0, 00:16:54.898 "data_size": 0 00:16:54.898 }, 00:16:54.898 { 00:16:54.898 "name": "BaseBdev3", 00:16:54.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.898 "is_configured": false, 00:16:54.898 "data_offset": 0, 00:16:54.898 "data_size": 0 00:16:54.898 }, 00:16:54.898 { 00:16:54.898 "name": "BaseBdev4", 00:16:54.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.898 "is_configured": false, 00:16:54.898 "data_offset": 0, 00:16:54.898 "data_size": 0 00:16:54.898 } 00:16:54.898 ] 00:16:54.898 }' 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.898 18:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.157 [2024-12-06 18:14:07.278957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.157 [2024-12-06 18:14:07.279077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.157 [2024-12-06 18:14:07.290946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.157 [2024-12-06 18:14:07.291047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.157 [2024-12-06 18:14:07.291091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.157 [2024-12-06 18:14:07.291117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.157 [2024-12-06 18:14:07.291137] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.157 [2024-12-06 18:14:07.291164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.157 [2024-12-06 18:14:07.291192] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:55.157 [2024-12-06 18:14:07.291252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.157 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.417 [2024-12-06 18:14:07.341144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.417 BaseBdev1 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.417 [ 00:16:55.417 { 00:16:55.417 "name": "BaseBdev1", 00:16:55.417 "aliases": [ 00:16:55.417 "7ad4e6bd-711b-43a4-b19c-cf2f768dee46" 00:16:55.417 ], 00:16:55.417 "product_name": "Malloc disk", 00:16:55.417 "block_size": 512, 00:16:55.417 "num_blocks": 65536, 00:16:55.417 "uuid": "7ad4e6bd-711b-43a4-b19c-cf2f768dee46", 00:16:55.417 "assigned_rate_limits": { 00:16:55.417 "rw_ios_per_sec": 0, 00:16:55.417 "rw_mbytes_per_sec": 0, 00:16:55.417 "r_mbytes_per_sec": 0, 00:16:55.417 "w_mbytes_per_sec": 0 00:16:55.417 }, 00:16:55.417 "claimed": true, 00:16:55.417 "claim_type": "exclusive_write", 00:16:55.417 "zoned": false, 00:16:55.417 "supported_io_types": { 00:16:55.417 "read": true, 00:16:55.417 "write": true, 00:16:55.417 "unmap": true, 00:16:55.417 "flush": true, 00:16:55.417 "reset": true, 00:16:55.417 "nvme_admin": false, 00:16:55.417 "nvme_io": false, 00:16:55.417 "nvme_io_md": false, 00:16:55.417 "write_zeroes": true, 00:16:55.417 "zcopy": true, 00:16:55.417 "get_zone_info": false, 00:16:55.417 "zone_management": false, 00:16:55.417 "zone_append": false, 00:16:55.417 "compare": false, 00:16:55.417 "compare_and_write": false, 00:16:55.417 "abort": true, 00:16:55.417 "seek_hole": false, 00:16:55.417 "seek_data": false, 00:16:55.417 "copy": true, 00:16:55.417 "nvme_iov_md": false 00:16:55.417 }, 00:16:55.417 "memory_domains": [ 00:16:55.417 { 00:16:55.417 "dma_device_id": "system", 00:16:55.417 "dma_device_type": 1 00:16:55.417 }, 00:16:55.417 { 00:16:55.417 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:55.417 "dma_device_type": 2 00:16:55.417 } 00:16:55.417 ], 00:16:55.417 "driver_specific": {} 00:16:55.417 } 00:16:55.417 ] 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.417 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.417 "name": "Existed_Raid", 00:16:55.417 "uuid": "6cd57e09-7fd2-449f-b6ab-964aaedd847d", 00:16:55.417 "strip_size_kb": 64, 00:16:55.417 "state": "configuring", 00:16:55.417 "raid_level": "raid5f", 00:16:55.417 "superblock": true, 00:16:55.417 "num_base_bdevs": 4, 00:16:55.418 "num_base_bdevs_discovered": 1, 00:16:55.418 "num_base_bdevs_operational": 4, 00:16:55.418 "base_bdevs_list": [ 00:16:55.418 { 00:16:55.418 "name": "BaseBdev1", 00:16:55.418 "uuid": "7ad4e6bd-711b-43a4-b19c-cf2f768dee46", 00:16:55.418 "is_configured": true, 00:16:55.418 "data_offset": 2048, 00:16:55.418 "data_size": 63488 00:16:55.418 }, 00:16:55.418 { 00:16:55.418 "name": "BaseBdev2", 00:16:55.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.418 "is_configured": false, 00:16:55.418 "data_offset": 0, 00:16:55.418 "data_size": 0 00:16:55.418 }, 00:16:55.418 { 00:16:55.418 "name": "BaseBdev3", 00:16:55.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.418 "is_configured": false, 00:16:55.418 "data_offset": 0, 00:16:55.418 "data_size": 0 00:16:55.418 }, 00:16:55.418 { 00:16:55.418 "name": "BaseBdev4", 00:16:55.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.418 "is_configured": false, 00:16:55.418 "data_offset": 0, 00:16:55.418 "data_size": 0 00:16:55.418 } 00:16:55.418 ] 00:16:55.418 }' 00:16:55.418 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.418 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.677 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:55.677 18:14:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.677 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.935 [2024-12-06 18:14:07.844366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.935 [2024-12-06 18:14:07.844475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.935 [2024-12-06 18:14:07.852426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.935 [2024-12-06 18:14:07.854488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.935 [2024-12-06 18:14:07.854572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.935 [2024-12-06 18:14:07.854610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.935 [2024-12-06 18:14:07.854639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.935 [2024-12-06 18:14:07.854668] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:55.935 [2024-12-06 18:14:07.854692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.935 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.936 18:14:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.936 "name": "Existed_Raid", 00:16:55.936 "uuid": "9e3b126d-33e5-4fc7-9b23-5e0589a8ccc7", 00:16:55.936 "strip_size_kb": 64, 00:16:55.936 "state": "configuring", 00:16:55.936 "raid_level": "raid5f", 00:16:55.936 "superblock": true, 00:16:55.936 "num_base_bdevs": 4, 00:16:55.936 "num_base_bdevs_discovered": 1, 00:16:55.936 "num_base_bdevs_operational": 4, 00:16:55.936 "base_bdevs_list": [ 00:16:55.936 { 00:16:55.936 "name": "BaseBdev1", 00:16:55.936 "uuid": "7ad4e6bd-711b-43a4-b19c-cf2f768dee46", 00:16:55.936 "is_configured": true, 00:16:55.936 "data_offset": 2048, 00:16:55.936 "data_size": 63488 00:16:55.936 }, 00:16:55.936 { 00:16:55.936 "name": "BaseBdev2", 00:16:55.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.936 "is_configured": false, 00:16:55.936 "data_offset": 0, 00:16:55.936 "data_size": 0 00:16:55.936 }, 00:16:55.936 { 00:16:55.936 "name": "BaseBdev3", 00:16:55.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.936 "is_configured": false, 00:16:55.936 "data_offset": 0, 00:16:55.936 "data_size": 0 00:16:55.936 }, 00:16:55.936 { 00:16:55.936 "name": "BaseBdev4", 00:16:55.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.936 "is_configured": false, 00:16:55.936 "data_offset": 0, 00:16:55.936 "data_size": 0 00:16:55.936 } 00:16:55.936 ] 00:16:55.936 }' 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.936 18:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.195 [2024-12-06 18:14:08.342442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.195 BaseBdev2 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.195 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 [ 00:16:56.455 { 00:16:56.455 "name": "BaseBdev2", 00:16:56.455 "aliases": [ 00:16:56.455 
"1bd7dc6f-b85b-40df-a8ff-8891be89f6c1" 00:16:56.455 ], 00:16:56.455 "product_name": "Malloc disk", 00:16:56.455 "block_size": 512, 00:16:56.455 "num_blocks": 65536, 00:16:56.455 "uuid": "1bd7dc6f-b85b-40df-a8ff-8891be89f6c1", 00:16:56.455 "assigned_rate_limits": { 00:16:56.455 "rw_ios_per_sec": 0, 00:16:56.455 "rw_mbytes_per_sec": 0, 00:16:56.455 "r_mbytes_per_sec": 0, 00:16:56.455 "w_mbytes_per_sec": 0 00:16:56.455 }, 00:16:56.455 "claimed": true, 00:16:56.455 "claim_type": "exclusive_write", 00:16:56.455 "zoned": false, 00:16:56.455 "supported_io_types": { 00:16:56.455 "read": true, 00:16:56.455 "write": true, 00:16:56.455 "unmap": true, 00:16:56.455 "flush": true, 00:16:56.455 "reset": true, 00:16:56.455 "nvme_admin": false, 00:16:56.455 "nvme_io": false, 00:16:56.455 "nvme_io_md": false, 00:16:56.455 "write_zeroes": true, 00:16:56.455 "zcopy": true, 00:16:56.455 "get_zone_info": false, 00:16:56.455 "zone_management": false, 00:16:56.455 "zone_append": false, 00:16:56.455 "compare": false, 00:16:56.455 "compare_and_write": false, 00:16:56.455 "abort": true, 00:16:56.455 "seek_hole": false, 00:16:56.455 "seek_data": false, 00:16:56.455 "copy": true, 00:16:56.455 "nvme_iov_md": false 00:16:56.455 }, 00:16:56.455 "memory_domains": [ 00:16:56.455 { 00:16:56.455 "dma_device_id": "system", 00:16:56.455 "dma_device_type": 1 00:16:56.455 }, 00:16:56.455 { 00:16:56.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.455 "dma_device_type": 2 00:16:56.455 } 00:16:56.455 ], 00:16:56.455 "driver_specific": {} 00:16:56.455 } 00:16:56.455 ] 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.455 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.455 "name": "Existed_Raid", 00:16:56.455 "uuid": 
"9e3b126d-33e5-4fc7-9b23-5e0589a8ccc7", 00:16:56.455 "strip_size_kb": 64, 00:16:56.455 "state": "configuring", 00:16:56.455 "raid_level": "raid5f", 00:16:56.455 "superblock": true, 00:16:56.455 "num_base_bdevs": 4, 00:16:56.455 "num_base_bdevs_discovered": 2, 00:16:56.455 "num_base_bdevs_operational": 4, 00:16:56.455 "base_bdevs_list": [ 00:16:56.455 { 00:16:56.455 "name": "BaseBdev1", 00:16:56.455 "uuid": "7ad4e6bd-711b-43a4-b19c-cf2f768dee46", 00:16:56.455 "is_configured": true, 00:16:56.455 "data_offset": 2048, 00:16:56.455 "data_size": 63488 00:16:56.455 }, 00:16:56.455 { 00:16:56.455 "name": "BaseBdev2", 00:16:56.455 "uuid": "1bd7dc6f-b85b-40df-a8ff-8891be89f6c1", 00:16:56.455 "is_configured": true, 00:16:56.455 "data_offset": 2048, 00:16:56.455 "data_size": 63488 00:16:56.455 }, 00:16:56.455 { 00:16:56.455 "name": "BaseBdev3", 00:16:56.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.456 "is_configured": false, 00:16:56.456 "data_offset": 0, 00:16:56.456 "data_size": 0 00:16:56.456 }, 00:16:56.456 { 00:16:56.456 "name": "BaseBdev4", 00:16:56.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.456 "is_configured": false, 00:16:56.456 "data_offset": 0, 00:16:56.456 "data_size": 0 00:16:56.456 } 00:16:56.456 ] 00:16:56.456 }' 00:16:56.456 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.456 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.716 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:56.716 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.716 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.976 [2024-12-06 18:14:08.923460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:56.976 BaseBdev3 
00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.976 [ 00:16:56.976 { 00:16:56.976 "name": "BaseBdev3", 00:16:56.976 "aliases": [ 00:16:56.976 "19fefa5b-6303-4845-a888-e18edfe5d9a8" 00:16:56.976 ], 00:16:56.976 "product_name": "Malloc disk", 00:16:56.976 "block_size": 512, 00:16:56.976 "num_blocks": 65536, 00:16:56.976 "uuid": "19fefa5b-6303-4845-a888-e18edfe5d9a8", 00:16:56.976 
"assigned_rate_limits": { 00:16:56.976 "rw_ios_per_sec": 0, 00:16:56.976 "rw_mbytes_per_sec": 0, 00:16:56.976 "r_mbytes_per_sec": 0, 00:16:56.976 "w_mbytes_per_sec": 0 00:16:56.976 }, 00:16:56.976 "claimed": true, 00:16:56.976 "claim_type": "exclusive_write", 00:16:56.976 "zoned": false, 00:16:56.976 "supported_io_types": { 00:16:56.976 "read": true, 00:16:56.976 "write": true, 00:16:56.976 "unmap": true, 00:16:56.976 "flush": true, 00:16:56.976 "reset": true, 00:16:56.976 "nvme_admin": false, 00:16:56.976 "nvme_io": false, 00:16:56.976 "nvme_io_md": false, 00:16:56.976 "write_zeroes": true, 00:16:56.976 "zcopy": true, 00:16:56.976 "get_zone_info": false, 00:16:56.976 "zone_management": false, 00:16:56.976 "zone_append": false, 00:16:56.976 "compare": false, 00:16:56.976 "compare_and_write": false, 00:16:56.976 "abort": true, 00:16:56.976 "seek_hole": false, 00:16:56.976 "seek_data": false, 00:16:56.976 "copy": true, 00:16:56.976 "nvme_iov_md": false 00:16:56.976 }, 00:16:56.976 "memory_domains": [ 00:16:56.976 { 00:16:56.976 "dma_device_id": "system", 00:16:56.976 "dma_device_type": 1 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.976 "dma_device_type": 2 00:16:56.976 } 00:16:56.976 ], 00:16:56.976 "driver_specific": {} 00:16:56.976 } 00:16:56.976 ] 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.976 18:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.976 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.976 "name": "Existed_Raid", 00:16:56.976 "uuid": "9e3b126d-33e5-4fc7-9b23-5e0589a8ccc7", 00:16:56.976 "strip_size_kb": 64, 00:16:56.976 "state": "configuring", 00:16:56.976 "raid_level": "raid5f", 00:16:56.976 "superblock": true, 00:16:56.976 "num_base_bdevs": 4, 00:16:56.976 "num_base_bdevs_discovered": 3, 
00:16:56.976 "num_base_bdevs_operational": 4, 00:16:56.976 "base_bdevs_list": [ 00:16:56.976 { 00:16:56.976 "name": "BaseBdev1", 00:16:56.976 "uuid": "7ad4e6bd-711b-43a4-b19c-cf2f768dee46", 00:16:56.976 "is_configured": true, 00:16:56.976 "data_offset": 2048, 00:16:56.976 "data_size": 63488 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "name": "BaseBdev2", 00:16:56.976 "uuid": "1bd7dc6f-b85b-40df-a8ff-8891be89f6c1", 00:16:56.976 "is_configured": true, 00:16:56.976 "data_offset": 2048, 00:16:56.976 "data_size": 63488 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "name": "BaseBdev3", 00:16:56.976 "uuid": "19fefa5b-6303-4845-a888-e18edfe5d9a8", 00:16:56.976 "is_configured": true, 00:16:56.976 "data_offset": 2048, 00:16:56.976 "data_size": 63488 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "name": "BaseBdev4", 00:16:56.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.976 "is_configured": false, 00:16:56.976 "data_offset": 0, 00:16:56.976 "data_size": 0 00:16:56.976 } 00:16:56.976 ] 00:16:56.976 }' 00:16:56.976 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.976 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.236 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:57.236 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.236 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.496 [2024-12-06 18:14:09.417848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:57.496 [2024-12-06 18:14:09.418309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:57.496 [2024-12-06 18:14:09.418370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:57.496 [2024-12-06 
18:14:09.418682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:57.496 BaseBdev4 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.496 [2024-12-06 18:14:09.427471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:57.496 [2024-12-06 18:14:09.427545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:57.496 [2024-12-06 18:14:09.427895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:57.496 18:14:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.496 [ 00:16:57.496 { 00:16:57.496 "name": "BaseBdev4", 00:16:57.496 "aliases": [ 00:16:57.496 "40ce99f9-abfe-476e-aecc-71b0d9a400b7" 00:16:57.496 ], 00:16:57.496 "product_name": "Malloc disk", 00:16:57.496 "block_size": 512, 00:16:57.496 "num_blocks": 65536, 00:16:57.496 "uuid": "40ce99f9-abfe-476e-aecc-71b0d9a400b7", 00:16:57.496 "assigned_rate_limits": { 00:16:57.496 "rw_ios_per_sec": 0, 00:16:57.496 "rw_mbytes_per_sec": 0, 00:16:57.496 "r_mbytes_per_sec": 0, 00:16:57.496 "w_mbytes_per_sec": 0 00:16:57.496 }, 00:16:57.496 "claimed": true, 00:16:57.496 "claim_type": "exclusive_write", 00:16:57.496 "zoned": false, 00:16:57.496 "supported_io_types": { 00:16:57.496 "read": true, 00:16:57.496 "write": true, 00:16:57.496 "unmap": true, 00:16:57.496 "flush": true, 00:16:57.496 "reset": true, 00:16:57.496 "nvme_admin": false, 00:16:57.496 "nvme_io": false, 00:16:57.496 "nvme_io_md": false, 00:16:57.496 "write_zeroes": true, 00:16:57.496 "zcopy": true, 00:16:57.496 "get_zone_info": false, 00:16:57.496 "zone_management": false, 00:16:57.496 "zone_append": false, 00:16:57.496 "compare": false, 00:16:57.496 "compare_and_write": false, 00:16:57.496 "abort": true, 00:16:57.496 "seek_hole": false, 00:16:57.496 "seek_data": false, 00:16:57.496 "copy": true, 00:16:57.496 "nvme_iov_md": false 00:16:57.496 }, 00:16:57.496 "memory_domains": [ 00:16:57.496 { 00:16:57.496 "dma_device_id": "system", 00:16:57.496 "dma_device_type": 1 00:16:57.496 }, 00:16:57.496 { 00:16:57.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.496 "dma_device_type": 2 00:16:57.496 } 00:16:57.496 ], 00:16:57.496 "driver_specific": {} 00:16:57.496 } 00:16:57.496 ] 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.496 18:14:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.496 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.496 "name": "Existed_Raid", 00:16:57.497 "uuid": "9e3b126d-33e5-4fc7-9b23-5e0589a8ccc7", 00:16:57.497 "strip_size_kb": 64, 00:16:57.497 "state": "online", 00:16:57.497 "raid_level": "raid5f", 00:16:57.497 "superblock": true, 00:16:57.497 "num_base_bdevs": 4, 00:16:57.497 "num_base_bdevs_discovered": 4, 00:16:57.497 "num_base_bdevs_operational": 4, 00:16:57.497 "base_bdevs_list": [ 00:16:57.497 { 00:16:57.497 "name": "BaseBdev1", 00:16:57.497 "uuid": "7ad4e6bd-711b-43a4-b19c-cf2f768dee46", 00:16:57.497 "is_configured": true, 00:16:57.497 "data_offset": 2048, 00:16:57.497 "data_size": 63488 00:16:57.497 }, 00:16:57.497 { 00:16:57.497 "name": "BaseBdev2", 00:16:57.497 "uuid": "1bd7dc6f-b85b-40df-a8ff-8891be89f6c1", 00:16:57.497 "is_configured": true, 00:16:57.497 "data_offset": 2048, 00:16:57.497 "data_size": 63488 00:16:57.497 }, 00:16:57.497 { 00:16:57.497 "name": "BaseBdev3", 00:16:57.497 "uuid": "19fefa5b-6303-4845-a888-e18edfe5d9a8", 00:16:57.497 "is_configured": true, 00:16:57.497 "data_offset": 2048, 00:16:57.497 "data_size": 63488 00:16:57.497 }, 00:16:57.497 { 00:16:57.497 "name": "BaseBdev4", 00:16:57.497 "uuid": "40ce99f9-abfe-476e-aecc-71b0d9a400b7", 00:16:57.497 "is_configured": true, 00:16:57.497 "data_offset": 2048, 00:16:57.497 "data_size": 63488 00:16:57.497 } 00:16:57.497 ] 00:16:57.497 }' 00:16:57.497 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.497 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.756 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:57.756 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:57.756 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:57.756 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.016 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.016 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.016 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.016 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:58.016 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.016 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.016 [2024-12-06 18:14:09.937175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.016 18:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.016 18:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.016 "name": "Existed_Raid", 00:16:58.016 "aliases": [ 00:16:58.016 "9e3b126d-33e5-4fc7-9b23-5e0589a8ccc7" 00:16:58.016 ], 00:16:58.016 "product_name": "Raid Volume", 00:16:58.016 "block_size": 512, 00:16:58.016 "num_blocks": 190464, 00:16:58.016 "uuid": "9e3b126d-33e5-4fc7-9b23-5e0589a8ccc7", 00:16:58.016 "assigned_rate_limits": { 00:16:58.016 "rw_ios_per_sec": 0, 00:16:58.016 "rw_mbytes_per_sec": 0, 00:16:58.016 "r_mbytes_per_sec": 0, 00:16:58.016 "w_mbytes_per_sec": 0 00:16:58.016 }, 00:16:58.016 "claimed": false, 00:16:58.016 "zoned": false, 00:16:58.016 "supported_io_types": { 00:16:58.016 "read": true, 00:16:58.016 "write": true, 00:16:58.016 "unmap": false, 00:16:58.016 "flush": false, 
00:16:58.016 "reset": true, 00:16:58.016 "nvme_admin": false, 00:16:58.016 "nvme_io": false, 00:16:58.016 "nvme_io_md": false, 00:16:58.016 "write_zeroes": true, 00:16:58.016 "zcopy": false, 00:16:58.016 "get_zone_info": false, 00:16:58.016 "zone_management": false, 00:16:58.016 "zone_append": false, 00:16:58.016 "compare": false, 00:16:58.016 "compare_and_write": false, 00:16:58.016 "abort": false, 00:16:58.016 "seek_hole": false, 00:16:58.016 "seek_data": false, 00:16:58.016 "copy": false, 00:16:58.016 "nvme_iov_md": false 00:16:58.016 }, 00:16:58.016 "driver_specific": { 00:16:58.016 "raid": { 00:16:58.016 "uuid": "9e3b126d-33e5-4fc7-9b23-5e0589a8ccc7", 00:16:58.016 "strip_size_kb": 64, 00:16:58.016 "state": "online", 00:16:58.016 "raid_level": "raid5f", 00:16:58.016 "superblock": true, 00:16:58.016 "num_base_bdevs": 4, 00:16:58.016 "num_base_bdevs_discovered": 4, 00:16:58.016 "num_base_bdevs_operational": 4, 00:16:58.016 "base_bdevs_list": [ 00:16:58.016 { 00:16:58.016 "name": "BaseBdev1", 00:16:58.016 "uuid": "7ad4e6bd-711b-43a4-b19c-cf2f768dee46", 00:16:58.016 "is_configured": true, 00:16:58.016 "data_offset": 2048, 00:16:58.016 "data_size": 63488 00:16:58.016 }, 00:16:58.016 { 00:16:58.016 "name": "BaseBdev2", 00:16:58.016 "uuid": "1bd7dc6f-b85b-40df-a8ff-8891be89f6c1", 00:16:58.016 "is_configured": true, 00:16:58.016 "data_offset": 2048, 00:16:58.016 "data_size": 63488 00:16:58.016 }, 00:16:58.016 { 00:16:58.016 "name": "BaseBdev3", 00:16:58.016 "uuid": "19fefa5b-6303-4845-a888-e18edfe5d9a8", 00:16:58.016 "is_configured": true, 00:16:58.016 "data_offset": 2048, 00:16:58.016 "data_size": 63488 00:16:58.016 }, 00:16:58.016 { 00:16:58.016 "name": "BaseBdev4", 00:16:58.016 "uuid": "40ce99f9-abfe-476e-aecc-71b0d9a400b7", 00:16:58.016 "is_configured": true, 00:16:58.016 "data_offset": 2048, 00:16:58.016 "data_size": 63488 00:16:58.016 } 00:16:58.016 ] 00:16:58.016 } 00:16:58.016 } 00:16:58.016 }' 00:16:58.016 18:14:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:58.016 BaseBdev2 00:16:58.016 BaseBdev3 00:16:58.016 BaseBdev4' 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.016 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:58.278 18:14:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.278 [2024-12-06 18:14:10.264367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.278 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.278 "name": "Existed_Raid", 00:16:58.278 "uuid": "9e3b126d-33e5-4fc7-9b23-5e0589a8ccc7", 00:16:58.278 "strip_size_kb": 64, 00:16:58.278 "state": "online", 00:16:58.278 "raid_level": "raid5f", 00:16:58.278 "superblock": true, 00:16:58.278 "num_base_bdevs": 4, 00:16:58.278 "num_base_bdevs_discovered": 3, 00:16:58.278 "num_base_bdevs_operational": 3, 00:16:58.278 "base_bdevs_list": [ 00:16:58.278 { 00:16:58.278 "name": 
null, 00:16:58.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.278 "is_configured": false, 00:16:58.278 "data_offset": 0, 00:16:58.278 "data_size": 63488 00:16:58.278 }, 00:16:58.278 { 00:16:58.278 "name": "BaseBdev2", 00:16:58.278 "uuid": "1bd7dc6f-b85b-40df-a8ff-8891be89f6c1", 00:16:58.278 "is_configured": true, 00:16:58.278 "data_offset": 2048, 00:16:58.278 "data_size": 63488 00:16:58.278 }, 00:16:58.278 { 00:16:58.278 "name": "BaseBdev3", 00:16:58.278 "uuid": "19fefa5b-6303-4845-a888-e18edfe5d9a8", 00:16:58.278 "is_configured": true, 00:16:58.278 "data_offset": 2048, 00:16:58.278 "data_size": 63488 00:16:58.278 }, 00:16:58.278 { 00:16:58.278 "name": "BaseBdev4", 00:16:58.278 "uuid": "40ce99f9-abfe-476e-aecc-71b0d9a400b7", 00:16:58.279 "is_configured": true, 00:16:58.279 "data_offset": 2048, 00:16:58.279 "data_size": 63488 00:16:58.279 } 00:16:58.279 ] 00:16:58.279 }' 00:16:58.279 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.279 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.848 18:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.848 [2024-12-06 18:14:10.921863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:58.848 [2024-12-06 18:14:10.922155] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.106 [2024-12-06 18:14:11.037394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.106 [2024-12-06 18:14:11.089401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.106 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.106 [2024-12-06 
18:14:11.256292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:59.106 [2024-12-06 18:14:11.256361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.372 18:14:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.372 BaseBdev2 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.372 [ 00:16:59.372 { 00:16:59.372 "name": "BaseBdev2", 00:16:59.372 "aliases": [ 00:16:59.372 "dd1938fd-0293-4d02-aa3d-d0ede70c7724" 00:16:59.372 ], 00:16:59.372 "product_name": "Malloc disk", 00:16:59.372 "block_size": 512, 00:16:59.372 
"num_blocks": 65536, 00:16:59.372 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:16:59.372 "assigned_rate_limits": { 00:16:59.372 "rw_ios_per_sec": 0, 00:16:59.372 "rw_mbytes_per_sec": 0, 00:16:59.372 "r_mbytes_per_sec": 0, 00:16:59.372 "w_mbytes_per_sec": 0 00:16:59.372 }, 00:16:59.372 "claimed": false, 00:16:59.372 "zoned": false, 00:16:59.372 "supported_io_types": { 00:16:59.372 "read": true, 00:16:59.372 "write": true, 00:16:59.372 "unmap": true, 00:16:59.372 "flush": true, 00:16:59.372 "reset": true, 00:16:59.372 "nvme_admin": false, 00:16:59.372 "nvme_io": false, 00:16:59.372 "nvme_io_md": false, 00:16:59.372 "write_zeroes": true, 00:16:59.372 "zcopy": true, 00:16:59.372 "get_zone_info": false, 00:16:59.372 "zone_management": false, 00:16:59.372 "zone_append": false, 00:16:59.372 "compare": false, 00:16:59.372 "compare_and_write": false, 00:16:59.372 "abort": true, 00:16:59.372 "seek_hole": false, 00:16:59.372 "seek_data": false, 00:16:59.372 "copy": true, 00:16:59.372 "nvme_iov_md": false 00:16:59.372 }, 00:16:59.372 "memory_domains": [ 00:16:59.372 { 00:16:59.372 "dma_device_id": "system", 00:16:59.372 "dma_device_type": 1 00:16:59.372 }, 00:16:59.372 { 00:16:59.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.372 "dma_device_type": 2 00:16:59.372 } 00:16:59.372 ], 00:16:59.372 "driver_specific": {} 00:16:59.372 } 00:16:59.372 ] 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:59.372 18:14:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.372 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.642 BaseBdev3 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.642 [ 00:16:59.642 { 00:16:59.642 "name": "BaseBdev3", 00:16:59.642 "aliases": [ 00:16:59.642 
"e9cfbd8d-0c20-443c-a893-8b8907d3305e" 00:16:59.642 ], 00:16:59.642 "product_name": "Malloc disk", 00:16:59.642 "block_size": 512, 00:16:59.642 "num_blocks": 65536, 00:16:59.642 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:16:59.642 "assigned_rate_limits": { 00:16:59.642 "rw_ios_per_sec": 0, 00:16:59.642 "rw_mbytes_per_sec": 0, 00:16:59.642 "r_mbytes_per_sec": 0, 00:16:59.642 "w_mbytes_per_sec": 0 00:16:59.642 }, 00:16:59.642 "claimed": false, 00:16:59.642 "zoned": false, 00:16:59.642 "supported_io_types": { 00:16:59.642 "read": true, 00:16:59.642 "write": true, 00:16:59.642 "unmap": true, 00:16:59.642 "flush": true, 00:16:59.642 "reset": true, 00:16:59.642 "nvme_admin": false, 00:16:59.642 "nvme_io": false, 00:16:59.642 "nvme_io_md": false, 00:16:59.642 "write_zeroes": true, 00:16:59.642 "zcopy": true, 00:16:59.642 "get_zone_info": false, 00:16:59.642 "zone_management": false, 00:16:59.642 "zone_append": false, 00:16:59.642 "compare": false, 00:16:59.642 "compare_and_write": false, 00:16:59.642 "abort": true, 00:16:59.642 "seek_hole": false, 00:16:59.642 "seek_data": false, 00:16:59.642 "copy": true, 00:16:59.642 "nvme_iov_md": false 00:16:59.642 }, 00:16:59.642 "memory_domains": [ 00:16:59.642 { 00:16:59.642 "dma_device_id": "system", 00:16:59.642 "dma_device_type": 1 00:16:59.642 }, 00:16:59.642 { 00:16:59.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.642 "dma_device_type": 2 00:16:59.642 } 00:16:59.642 ], 00:16:59.642 "driver_specific": {} 00:16:59.642 } 00:16:59.642 ] 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:59.642 18:14:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.642 BaseBdev4 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.642 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:59.642 [ 00:16:59.642 { 00:16:59.642 "name": "BaseBdev4", 00:16:59.642 "aliases": [ 00:16:59.642 "dbea3945-4981-4c00-89b1-e045172854e3" 00:16:59.642 ], 00:16:59.642 "product_name": "Malloc disk", 00:16:59.642 "block_size": 512, 00:16:59.642 "num_blocks": 65536, 00:16:59.642 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:16:59.642 "assigned_rate_limits": { 00:16:59.642 "rw_ios_per_sec": 0, 00:16:59.642 "rw_mbytes_per_sec": 0, 00:16:59.642 "r_mbytes_per_sec": 0, 00:16:59.642 "w_mbytes_per_sec": 0 00:16:59.642 }, 00:16:59.642 "claimed": false, 00:16:59.642 "zoned": false, 00:16:59.642 "supported_io_types": { 00:16:59.642 "read": true, 00:16:59.642 "write": true, 00:16:59.642 "unmap": true, 00:16:59.642 "flush": true, 00:16:59.642 "reset": true, 00:16:59.642 "nvme_admin": false, 00:16:59.642 "nvme_io": false, 00:16:59.642 "nvme_io_md": false, 00:16:59.643 "write_zeroes": true, 00:16:59.643 "zcopy": true, 00:16:59.643 "get_zone_info": false, 00:16:59.643 "zone_management": false, 00:16:59.643 "zone_append": false, 00:16:59.643 "compare": false, 00:16:59.643 "compare_and_write": false, 00:16:59.643 "abort": true, 00:16:59.643 "seek_hole": false, 00:16:59.643 "seek_data": false, 00:16:59.643 "copy": true, 00:16:59.643 "nvme_iov_md": false 00:16:59.643 }, 00:16:59.643 "memory_domains": [ 00:16:59.643 { 00:16:59.643 "dma_device_id": "system", 00:16:59.643 "dma_device_type": 1 00:16:59.643 }, 00:16:59.643 { 00:16:59.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.643 "dma_device_type": 2 00:16:59.643 } 00:16:59.643 ], 00:16:59.643 "driver_specific": {} 00:16:59.643 } 00:16:59.643 ] 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:59.643 18:14:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.643 [2024-12-06 18:14:11.688583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:59.643 [2024-12-06 18:14:11.688712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:59.643 [2024-12-06 18:14:11.688776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.643 [2024-12-06 18:14:11.690997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.643 [2024-12-06 18:14:11.691135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.643 "name": "Existed_Raid", 00:16:59.643 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:16:59.643 "strip_size_kb": 64, 00:16:59.643 "state": "configuring", 00:16:59.643 "raid_level": "raid5f", 00:16:59.643 "superblock": true, 00:16:59.643 "num_base_bdevs": 4, 00:16:59.643 "num_base_bdevs_discovered": 3, 00:16:59.643 "num_base_bdevs_operational": 4, 00:16:59.643 "base_bdevs_list": [ 00:16:59.643 { 00:16:59.643 "name": "BaseBdev1", 00:16:59.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.643 "is_configured": false, 00:16:59.643 "data_offset": 0, 00:16:59.643 "data_size": 0 00:16:59.643 }, 00:16:59.643 { 00:16:59.643 "name": "BaseBdev2", 00:16:59.643 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:16:59.643 "is_configured": true, 00:16:59.643 "data_offset": 2048, 00:16:59.643 
"data_size": 63488 00:16:59.643 }, 00:16:59.643 { 00:16:59.643 "name": "BaseBdev3", 00:16:59.643 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:16:59.643 "is_configured": true, 00:16:59.643 "data_offset": 2048, 00:16:59.643 "data_size": 63488 00:16:59.643 }, 00:16:59.643 { 00:16:59.643 "name": "BaseBdev4", 00:16:59.643 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:16:59.643 "is_configured": true, 00:16:59.643 "data_offset": 2048, 00:16:59.643 "data_size": 63488 00:16:59.643 } 00:16:59.643 ] 00:16:59.643 }' 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.643 18:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.213 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:00.213 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.213 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.213 [2024-12-06 18:14:12.147794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.214 18:14:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.214 "name": "Existed_Raid", 00:17:00.214 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:17:00.214 "strip_size_kb": 64, 00:17:00.214 "state": "configuring", 00:17:00.214 "raid_level": "raid5f", 00:17:00.214 "superblock": true, 00:17:00.214 "num_base_bdevs": 4, 00:17:00.214 "num_base_bdevs_discovered": 2, 00:17:00.214 "num_base_bdevs_operational": 4, 00:17:00.214 "base_bdevs_list": [ 00:17:00.214 { 00:17:00.214 "name": "BaseBdev1", 00:17:00.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.214 "is_configured": false, 00:17:00.214 "data_offset": 0, 00:17:00.214 "data_size": 0 00:17:00.214 }, 00:17:00.214 { 00:17:00.214 "name": null, 00:17:00.214 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:17:00.214 
"is_configured": false, 00:17:00.214 "data_offset": 0, 00:17:00.214 "data_size": 63488 00:17:00.214 }, 00:17:00.214 { 00:17:00.214 "name": "BaseBdev3", 00:17:00.214 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:17:00.214 "is_configured": true, 00:17:00.214 "data_offset": 2048, 00:17:00.214 "data_size": 63488 00:17:00.214 }, 00:17:00.214 { 00:17:00.214 "name": "BaseBdev4", 00:17:00.214 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:17:00.214 "is_configured": true, 00:17:00.214 "data_offset": 2048, 00:17:00.214 "data_size": 63488 00:17:00.214 } 00:17:00.214 ] 00:17:00.214 }' 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.214 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.474 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:00.474 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.474 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.474 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.474 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.735 [2024-12-06 18:14:12.698369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:00.735 BaseBdev1 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.735 [ 00:17:00.735 { 00:17:00.735 "name": "BaseBdev1", 00:17:00.735 "aliases": [ 00:17:00.735 "3269c04a-215b-468f-952b-ca487ae53ade" 00:17:00.735 ], 00:17:00.735 "product_name": "Malloc disk", 00:17:00.735 "block_size": 512, 00:17:00.735 "num_blocks": 65536, 00:17:00.735 "uuid": "3269c04a-215b-468f-952b-ca487ae53ade", 
00:17:00.735 "assigned_rate_limits": { 00:17:00.735 "rw_ios_per_sec": 0, 00:17:00.735 "rw_mbytes_per_sec": 0, 00:17:00.735 "r_mbytes_per_sec": 0, 00:17:00.735 "w_mbytes_per_sec": 0 00:17:00.735 }, 00:17:00.735 "claimed": true, 00:17:00.735 "claim_type": "exclusive_write", 00:17:00.735 "zoned": false, 00:17:00.735 "supported_io_types": { 00:17:00.735 "read": true, 00:17:00.735 "write": true, 00:17:00.735 "unmap": true, 00:17:00.735 "flush": true, 00:17:00.735 "reset": true, 00:17:00.735 "nvme_admin": false, 00:17:00.735 "nvme_io": false, 00:17:00.735 "nvme_io_md": false, 00:17:00.735 "write_zeroes": true, 00:17:00.735 "zcopy": true, 00:17:00.735 "get_zone_info": false, 00:17:00.735 "zone_management": false, 00:17:00.735 "zone_append": false, 00:17:00.735 "compare": false, 00:17:00.735 "compare_and_write": false, 00:17:00.735 "abort": true, 00:17:00.735 "seek_hole": false, 00:17:00.735 "seek_data": false, 00:17:00.735 "copy": true, 00:17:00.735 "nvme_iov_md": false 00:17:00.735 }, 00:17:00.735 "memory_domains": [ 00:17:00.735 { 00:17:00.735 "dma_device_id": "system", 00:17:00.735 "dma_device_type": 1 00:17:00.735 }, 00:17:00.735 { 00:17:00.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.735 "dma_device_type": 2 00:17:00.735 } 00:17:00.735 ], 00:17:00.735 "driver_specific": {} 00:17:00.735 } 00:17:00.735 ] 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.735 18:14:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.735 "name": "Existed_Raid", 00:17:00.735 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:17:00.735 "strip_size_kb": 64, 00:17:00.735 "state": "configuring", 00:17:00.735 "raid_level": "raid5f", 00:17:00.735 "superblock": true, 00:17:00.735 "num_base_bdevs": 4, 00:17:00.735 "num_base_bdevs_discovered": 3, 00:17:00.735 "num_base_bdevs_operational": 4, 00:17:00.735 "base_bdevs_list": [ 00:17:00.735 { 00:17:00.735 "name": "BaseBdev1", 00:17:00.735 "uuid": "3269c04a-215b-468f-952b-ca487ae53ade", 
00:17:00.735 "is_configured": true, 00:17:00.735 "data_offset": 2048, 00:17:00.735 "data_size": 63488 00:17:00.735 }, 00:17:00.735 { 00:17:00.735 "name": null, 00:17:00.735 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:17:00.735 "is_configured": false, 00:17:00.735 "data_offset": 0, 00:17:00.735 "data_size": 63488 00:17:00.735 }, 00:17:00.735 { 00:17:00.735 "name": "BaseBdev3", 00:17:00.735 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:17:00.735 "is_configured": true, 00:17:00.735 "data_offset": 2048, 00:17:00.735 "data_size": 63488 00:17:00.735 }, 00:17:00.735 { 00:17:00.735 "name": "BaseBdev4", 00:17:00.735 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:17:00.735 "is_configured": true, 00:17:00.735 "data_offset": 2048, 00:17:00.735 "data_size": 63488 00:17:00.735 } 00:17:00.735 ] 00:17:00.735 }' 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.735 18:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.304 [2024-12-06 18:14:13.289533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.304 "name": "Existed_Raid", 00:17:01.304 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:17:01.304 "strip_size_kb": 64, 00:17:01.304 "state": "configuring", 00:17:01.304 "raid_level": "raid5f", 00:17:01.304 "superblock": true, 00:17:01.304 "num_base_bdevs": 4, 00:17:01.304 "num_base_bdevs_discovered": 2, 00:17:01.304 "num_base_bdevs_operational": 4, 00:17:01.304 "base_bdevs_list": [ 00:17:01.304 { 00:17:01.304 "name": "BaseBdev1", 00:17:01.304 "uuid": "3269c04a-215b-468f-952b-ca487ae53ade", 00:17:01.304 "is_configured": true, 00:17:01.304 "data_offset": 2048, 00:17:01.304 "data_size": 63488 00:17:01.304 }, 00:17:01.304 { 00:17:01.304 "name": null, 00:17:01.304 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:17:01.304 "is_configured": false, 00:17:01.304 "data_offset": 0, 00:17:01.304 "data_size": 63488 00:17:01.304 }, 00:17:01.304 { 00:17:01.304 "name": null, 00:17:01.304 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:17:01.304 "is_configured": false, 00:17:01.304 "data_offset": 0, 00:17:01.304 "data_size": 63488 00:17:01.304 }, 00:17:01.304 { 00:17:01.304 "name": "BaseBdev4", 00:17:01.304 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:17:01.304 "is_configured": true, 00:17:01.304 "data_offset": 2048, 00:17:01.304 "data_size": 63488 00:17:01.304 } 00:17:01.304 ] 00:17:01.304 }' 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.304 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.870 [2024-12-06 18:14:13.844593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.870 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.871 18:14:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.871 "name": "Existed_Raid", 00:17:01.871 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:17:01.871 "strip_size_kb": 64, 00:17:01.871 "state": "configuring", 00:17:01.871 "raid_level": "raid5f", 00:17:01.871 "superblock": true, 00:17:01.871 "num_base_bdevs": 4, 00:17:01.871 "num_base_bdevs_discovered": 3, 00:17:01.871 "num_base_bdevs_operational": 4, 00:17:01.871 "base_bdevs_list": [ 00:17:01.871 { 00:17:01.871 "name": "BaseBdev1", 00:17:01.871 "uuid": "3269c04a-215b-468f-952b-ca487ae53ade", 00:17:01.871 "is_configured": true, 00:17:01.871 "data_offset": 2048, 00:17:01.871 "data_size": 63488 00:17:01.871 }, 00:17:01.871 { 00:17:01.871 "name": null, 00:17:01.871 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:17:01.871 "is_configured": false, 00:17:01.871 "data_offset": 0, 00:17:01.871 "data_size": 63488 00:17:01.871 }, 00:17:01.871 { 00:17:01.871 "name": "BaseBdev3", 00:17:01.871 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:17:01.871 
"is_configured": true, 00:17:01.871 "data_offset": 2048, 00:17:01.871 "data_size": 63488 00:17:01.871 }, 00:17:01.871 { 00:17:01.871 "name": "BaseBdev4", 00:17:01.871 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:17:01.871 "is_configured": true, 00:17:01.871 "data_offset": 2048, 00:17:01.871 "data_size": 63488 00:17:01.871 } 00:17:01.871 ] 00:17:01.871 }' 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.871 18:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.438 [2024-12-06 18:14:14.351851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.438 "name": "Existed_Raid", 00:17:02.438 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:17:02.438 "strip_size_kb": 64, 00:17:02.438 "state": "configuring", 00:17:02.438 "raid_level": "raid5f", 00:17:02.438 
"superblock": true, 00:17:02.438 "num_base_bdevs": 4, 00:17:02.438 "num_base_bdevs_discovered": 2, 00:17:02.438 "num_base_bdevs_operational": 4, 00:17:02.438 "base_bdevs_list": [ 00:17:02.438 { 00:17:02.438 "name": null, 00:17:02.438 "uuid": "3269c04a-215b-468f-952b-ca487ae53ade", 00:17:02.438 "is_configured": false, 00:17:02.438 "data_offset": 0, 00:17:02.438 "data_size": 63488 00:17:02.438 }, 00:17:02.438 { 00:17:02.438 "name": null, 00:17:02.438 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:17:02.438 "is_configured": false, 00:17:02.438 "data_offset": 0, 00:17:02.438 "data_size": 63488 00:17:02.438 }, 00:17:02.438 { 00:17:02.438 "name": "BaseBdev3", 00:17:02.438 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:17:02.438 "is_configured": true, 00:17:02.438 "data_offset": 2048, 00:17:02.438 "data_size": 63488 00:17:02.438 }, 00:17:02.438 { 00:17:02.438 "name": "BaseBdev4", 00:17:02.438 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:17:02.438 "is_configured": true, 00:17:02.438 "data_offset": 2048, 00:17:02.438 "data_size": 63488 00:17:02.438 } 00:17:02.438 ] 00:17:02.438 }' 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.438 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.007 [2024-12-06 18:14:14.975612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.007 18:14:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.007 18:14:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.007 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.007 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.007 "name": "Existed_Raid", 00:17:03.007 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:17:03.007 "strip_size_kb": 64, 00:17:03.007 "state": "configuring", 00:17:03.007 "raid_level": "raid5f", 00:17:03.007 "superblock": true, 00:17:03.007 "num_base_bdevs": 4, 00:17:03.007 "num_base_bdevs_discovered": 3, 00:17:03.007 "num_base_bdevs_operational": 4, 00:17:03.007 "base_bdevs_list": [ 00:17:03.007 { 00:17:03.007 "name": null, 00:17:03.007 "uuid": "3269c04a-215b-468f-952b-ca487ae53ade", 00:17:03.007 "is_configured": false, 00:17:03.007 "data_offset": 0, 00:17:03.007 "data_size": 63488 00:17:03.007 }, 00:17:03.007 { 00:17:03.007 "name": "BaseBdev2", 00:17:03.007 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:17:03.007 "is_configured": true, 00:17:03.007 "data_offset": 2048, 00:17:03.007 "data_size": 63488 00:17:03.007 }, 00:17:03.007 { 00:17:03.007 "name": "BaseBdev3", 00:17:03.007 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:17:03.007 "is_configured": true, 00:17:03.007 "data_offset": 2048, 00:17:03.007 "data_size": 63488 00:17:03.007 }, 00:17:03.007 { 00:17:03.007 "name": "BaseBdev4", 00:17:03.007 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:17:03.007 "is_configured": true, 00:17:03.007 "data_offset": 2048, 00:17:03.007 "data_size": 63488 00:17:03.007 } 00:17:03.007 ] 00:17:03.007 }' 00:17:03.008 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:17:03.008 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.573 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.573 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:03.573 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.573 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.573 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.573 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:03.573 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3269c04a-215b-468f-952b-ca487ae53ade 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.574 [2024-12-06 18:14:15.588926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:03.574 [2024-12-06 18:14:15.589303] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:03.574 [2024-12-06 18:14:15.589322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:03.574 [2024-12-06 18:14:15.589615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:03.574 NewBaseBdev 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.574 [2024-12-06 18:14:15.598057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:03.574 [2024-12-06 18:14:15.598141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:03.574 [2024-12-06 18:14:15.598474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.574 [ 00:17:03.574 { 00:17:03.574 "name": "NewBaseBdev", 00:17:03.574 "aliases": [ 00:17:03.574 "3269c04a-215b-468f-952b-ca487ae53ade" 00:17:03.574 ], 00:17:03.574 "product_name": "Malloc disk", 00:17:03.574 "block_size": 512, 00:17:03.574 "num_blocks": 65536, 00:17:03.574 "uuid": "3269c04a-215b-468f-952b-ca487ae53ade", 00:17:03.574 "assigned_rate_limits": { 00:17:03.574 "rw_ios_per_sec": 0, 00:17:03.574 "rw_mbytes_per_sec": 0, 00:17:03.574 "r_mbytes_per_sec": 0, 00:17:03.574 "w_mbytes_per_sec": 0 00:17:03.574 }, 00:17:03.574 "claimed": true, 00:17:03.574 "claim_type": "exclusive_write", 00:17:03.574 "zoned": false, 00:17:03.574 "supported_io_types": { 00:17:03.574 "read": true, 00:17:03.574 "write": true, 00:17:03.574 "unmap": true, 00:17:03.574 "flush": true, 00:17:03.574 "reset": true, 00:17:03.574 "nvme_admin": false, 00:17:03.574 "nvme_io": false, 00:17:03.574 "nvme_io_md": false, 00:17:03.574 "write_zeroes": true, 00:17:03.574 "zcopy": true, 00:17:03.574 "get_zone_info": false, 00:17:03.574 "zone_management": false, 00:17:03.574 "zone_append": false, 00:17:03.574 "compare": false, 00:17:03.574 "compare_and_write": false, 00:17:03.574 "abort": true, 00:17:03.574 "seek_hole": false, 00:17:03.574 "seek_data": false, 00:17:03.574 "copy": true, 00:17:03.574 "nvme_iov_md": false 00:17:03.574 }, 00:17:03.574 "memory_domains": [ 00:17:03.574 { 00:17:03.574 "dma_device_id": "system", 00:17:03.574 "dma_device_type": 1 00:17:03.574 }, 00:17:03.574 { 00:17:03.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.574 "dma_device_type": 2 00:17:03.574 } 
00:17:03.574 ], 00:17:03.574 "driver_specific": {} 00:17:03.574 } 00:17:03.574 ] 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.574 
18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.574 "name": "Existed_Raid", 00:17:03.574 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:17:03.574 "strip_size_kb": 64, 00:17:03.574 "state": "online", 00:17:03.574 "raid_level": "raid5f", 00:17:03.574 "superblock": true, 00:17:03.574 "num_base_bdevs": 4, 00:17:03.574 "num_base_bdevs_discovered": 4, 00:17:03.574 "num_base_bdevs_operational": 4, 00:17:03.574 "base_bdevs_list": [ 00:17:03.574 { 00:17:03.574 "name": "NewBaseBdev", 00:17:03.574 "uuid": "3269c04a-215b-468f-952b-ca487ae53ade", 00:17:03.574 "is_configured": true, 00:17:03.574 "data_offset": 2048, 00:17:03.574 "data_size": 63488 00:17:03.574 }, 00:17:03.574 { 00:17:03.574 "name": "BaseBdev2", 00:17:03.574 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:17:03.574 "is_configured": true, 00:17:03.574 "data_offset": 2048, 00:17:03.574 "data_size": 63488 00:17:03.574 }, 00:17:03.574 { 00:17:03.574 "name": "BaseBdev3", 00:17:03.574 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:17:03.574 "is_configured": true, 00:17:03.574 "data_offset": 2048, 00:17:03.574 "data_size": 63488 00:17:03.574 }, 00:17:03.574 { 00:17:03.574 "name": "BaseBdev4", 00:17:03.574 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:17:03.574 "is_configured": true, 00:17:03.574 "data_offset": 2048, 00:17:03.574 "data_size": 63488 00:17:03.574 } 00:17:03.574 ] 00:17:03.574 }' 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.574 18:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:04.141 [2024-12-06 18:14:16.075664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.141 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:04.141 "name": "Existed_Raid", 00:17:04.141 "aliases": [ 00:17:04.141 "872e8c03-e614-4b70-a4dc-3a3d22024f01" 00:17:04.141 ], 00:17:04.141 "product_name": "Raid Volume", 00:17:04.141 "block_size": 512, 00:17:04.141 "num_blocks": 190464, 00:17:04.141 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:17:04.141 "assigned_rate_limits": { 00:17:04.141 "rw_ios_per_sec": 0, 00:17:04.141 "rw_mbytes_per_sec": 0, 00:17:04.141 "r_mbytes_per_sec": 0, 00:17:04.141 "w_mbytes_per_sec": 0 00:17:04.141 }, 00:17:04.141 "claimed": false, 00:17:04.141 "zoned": false, 00:17:04.141 "supported_io_types": { 00:17:04.141 "read": true, 00:17:04.141 "write": true, 00:17:04.141 "unmap": false, 00:17:04.141 "flush": false, 
00:17:04.141 "reset": true, 00:17:04.141 "nvme_admin": false, 00:17:04.141 "nvme_io": false, 00:17:04.141 "nvme_io_md": false, 00:17:04.141 "write_zeroes": true, 00:17:04.141 "zcopy": false, 00:17:04.141 "get_zone_info": false, 00:17:04.142 "zone_management": false, 00:17:04.142 "zone_append": false, 00:17:04.142 "compare": false, 00:17:04.142 "compare_and_write": false, 00:17:04.142 "abort": false, 00:17:04.142 "seek_hole": false, 00:17:04.142 "seek_data": false, 00:17:04.142 "copy": false, 00:17:04.142 "nvme_iov_md": false 00:17:04.142 }, 00:17:04.142 "driver_specific": { 00:17:04.142 "raid": { 00:17:04.142 "uuid": "872e8c03-e614-4b70-a4dc-3a3d22024f01", 00:17:04.142 "strip_size_kb": 64, 00:17:04.142 "state": "online", 00:17:04.142 "raid_level": "raid5f", 00:17:04.142 "superblock": true, 00:17:04.142 "num_base_bdevs": 4, 00:17:04.142 "num_base_bdevs_discovered": 4, 00:17:04.142 "num_base_bdevs_operational": 4, 00:17:04.142 "base_bdevs_list": [ 00:17:04.142 { 00:17:04.142 "name": "NewBaseBdev", 00:17:04.142 "uuid": "3269c04a-215b-468f-952b-ca487ae53ade", 00:17:04.142 "is_configured": true, 00:17:04.142 "data_offset": 2048, 00:17:04.142 "data_size": 63488 00:17:04.142 }, 00:17:04.142 { 00:17:04.142 "name": "BaseBdev2", 00:17:04.142 "uuid": "dd1938fd-0293-4d02-aa3d-d0ede70c7724", 00:17:04.142 "is_configured": true, 00:17:04.142 "data_offset": 2048, 00:17:04.142 "data_size": 63488 00:17:04.142 }, 00:17:04.142 { 00:17:04.142 "name": "BaseBdev3", 00:17:04.142 "uuid": "e9cfbd8d-0c20-443c-a893-8b8907d3305e", 00:17:04.142 "is_configured": true, 00:17:04.142 "data_offset": 2048, 00:17:04.142 "data_size": 63488 00:17:04.142 }, 00:17:04.142 { 00:17:04.142 "name": "BaseBdev4", 00:17:04.142 "uuid": "dbea3945-4981-4c00-89b1-e045172854e3", 00:17:04.142 "is_configured": true, 00:17:04.142 "data_offset": 2048, 00:17:04.142 "data_size": 63488 00:17:04.142 } 00:17:04.142 ] 00:17:04.142 } 00:17:04.142 } 00:17:04.142 }' 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:04.142 BaseBdev2 00:17:04.142 BaseBdev3 00:17:04.142 BaseBdev4' 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.142 
18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.142 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.401 18:14:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.401 [2024-12-06 18:14:16.398777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.401 [2024-12-06 18:14:16.398809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.401 [2024-12-06 18:14:16.398892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.401 [2024-12-06 18:14:16.399226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.401 [2024-12-06 18:14:16.399240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84015 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84015 ']' 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 84015 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84015 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84015' 00:17:04.401 killing process with pid 84015 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84015 00:17:04.401 [2024-12-06 18:14:16.444392] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.401 18:14:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84015 00:17:04.990 [2024-12-06 18:14:16.878407] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.372 ************************************ 00:17:06.372 END TEST raid5f_state_function_test_sb 00:17:06.372 ************************************ 00:17:06.372 18:14:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:06.372 00:17:06.372 real 0m12.317s 00:17:06.372 user 0m19.425s 00:17:06.372 sys 0m2.146s 00:17:06.372 18:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.372 18:14:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.372 18:14:18 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:06.372 18:14:18 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:06.372 18:14:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.372 18:14:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.372 ************************************ 00:17:06.372 START TEST raid5f_superblock_test 00:17:06.372 ************************************ 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84690 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:06.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84690 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84690 ']' 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.372 18:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.372 [2024-12-06 18:14:18.343275] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:17:06.372 [2024-12-06 18:14:18.343477] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84690 ] 00:17:06.372 [2024-12-06 18:14:18.516307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.632 [2024-12-06 18:14:18.650330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.893 [2024-12-06 18:14:18.888471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.893 [2024-12-06 18:14:18.888630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.152 malloc1 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.152 [2024-12-06 18:14:19.257895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.152 [2024-12-06 18:14:19.258027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.152 [2024-12-06 18:14:19.258075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:07.152 [2024-12-06 18:14:19.258127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.152 [2024-12-06 18:14:19.260645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.152 [2024-12-06 18:14:19.260744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.152 pt1 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.152 malloc2 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.152 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.413 [2024-12-06 18:14:19.318773] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.413 [2024-12-06 18:14:19.318844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.413 [2024-12-06 18:14:19.318876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:07.413 [2024-12-06 18:14:19.318887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.413 [2024-12-06 18:14:19.321418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.413 [2024-12-06 18:14:19.321462] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.413 pt2 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.413 malloc3 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.413 [2024-12-06 18:14:19.388259] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:07.413 [2024-12-06 18:14:19.388372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.413 [2024-12-06 18:14:19.388421] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:07.413 [2024-12-06 18:14:19.388473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.413 [2024-12-06 18:14:19.390869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.413 [2024-12-06 18:14:19.390951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:07.413 pt3 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.413 18:14:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.413 malloc4 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.413 [2024-12-06 18:14:19.446752] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:07.413 [2024-12-06 18:14:19.446813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.413 [2024-12-06 18:14:19.446834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:07.413 [2024-12-06 18:14:19.446843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.413 [2024-12-06 18:14:19.448928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.413 [2024-12-06 18:14:19.448966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:07.413 pt4 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.413 18:14:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.413 [2024-12-06 18:14:19.458767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:07.413 [2024-12-06 18:14:19.460592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.414 [2024-12-06 18:14:19.460750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:07.414 [2024-12-06 18:14:19.460814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:07.414 [2024-12-06 18:14:19.461044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:07.414 [2024-12-06 18:14:19.461060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:07.414 [2024-12-06 18:14:19.461335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:07.414 [2024-12-06 18:14:19.468626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:07.414 [2024-12-06 18:14:19.468649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:07.414 [2024-12-06 18:14:19.468828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.414 
18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.414 "name": "raid_bdev1", 00:17:07.414 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:07.414 "strip_size_kb": 64, 00:17:07.414 "state": "online", 00:17:07.414 "raid_level": "raid5f", 00:17:07.414 "superblock": true, 00:17:07.414 "num_base_bdevs": 4, 00:17:07.414 "num_base_bdevs_discovered": 4, 00:17:07.414 "num_base_bdevs_operational": 4, 00:17:07.414 "base_bdevs_list": [ 00:17:07.414 { 00:17:07.414 "name": "pt1", 00:17:07.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.414 "is_configured": true, 00:17:07.414 "data_offset": 2048, 00:17:07.414 "data_size": 63488 00:17:07.414 }, 00:17:07.414 { 00:17:07.414 "name": "pt2", 00:17:07.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.414 "is_configured": true, 00:17:07.414 "data_offset": 2048, 00:17:07.414 
"data_size": 63488 00:17:07.414 }, 00:17:07.414 { 00:17:07.414 "name": "pt3", 00:17:07.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.414 "is_configured": true, 00:17:07.414 "data_offset": 2048, 00:17:07.414 "data_size": 63488 00:17:07.414 }, 00:17:07.414 { 00:17:07.414 "name": "pt4", 00:17:07.414 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.414 "is_configured": true, 00:17:07.414 "data_offset": 2048, 00:17:07.414 "data_size": 63488 00:17:07.414 } 00:17:07.414 ] 00:17:07.414 }' 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.414 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.982 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.983 [2024-12-06 18:14:19.940714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.983 "name": "raid_bdev1", 00:17:07.983 "aliases": [ 00:17:07.983 "95ad9398-70ef-43df-9f1b-a5a165c713d8" 00:17:07.983 ], 00:17:07.983 "product_name": "Raid Volume", 00:17:07.983 "block_size": 512, 00:17:07.983 "num_blocks": 190464, 00:17:07.983 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:07.983 "assigned_rate_limits": { 00:17:07.983 "rw_ios_per_sec": 0, 00:17:07.983 "rw_mbytes_per_sec": 0, 00:17:07.983 "r_mbytes_per_sec": 0, 00:17:07.983 "w_mbytes_per_sec": 0 00:17:07.983 }, 00:17:07.983 "claimed": false, 00:17:07.983 "zoned": false, 00:17:07.983 "supported_io_types": { 00:17:07.983 "read": true, 00:17:07.983 "write": true, 00:17:07.983 "unmap": false, 00:17:07.983 "flush": false, 00:17:07.983 "reset": true, 00:17:07.983 "nvme_admin": false, 00:17:07.983 "nvme_io": false, 00:17:07.983 "nvme_io_md": false, 00:17:07.983 "write_zeroes": true, 00:17:07.983 "zcopy": false, 00:17:07.983 "get_zone_info": false, 00:17:07.983 "zone_management": false, 00:17:07.983 "zone_append": false, 00:17:07.983 "compare": false, 00:17:07.983 "compare_and_write": false, 00:17:07.983 "abort": false, 00:17:07.983 "seek_hole": false, 00:17:07.983 "seek_data": false, 00:17:07.983 "copy": false, 00:17:07.983 "nvme_iov_md": false 00:17:07.983 }, 00:17:07.983 "driver_specific": { 00:17:07.983 "raid": { 00:17:07.983 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:07.983 "strip_size_kb": 64, 00:17:07.983 "state": "online", 00:17:07.983 "raid_level": "raid5f", 00:17:07.983 "superblock": true, 00:17:07.983 "num_base_bdevs": 4, 00:17:07.983 "num_base_bdevs_discovered": 4, 00:17:07.983 "num_base_bdevs_operational": 4, 00:17:07.983 "base_bdevs_list": [ 00:17:07.983 { 00:17:07.983 "name": "pt1", 00:17:07.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.983 "is_configured": true, 00:17:07.983 "data_offset": 2048, 
00:17:07.983 "data_size": 63488 00:17:07.983 }, 00:17:07.983 { 00:17:07.983 "name": "pt2", 00:17:07.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.983 "is_configured": true, 00:17:07.983 "data_offset": 2048, 00:17:07.983 "data_size": 63488 00:17:07.983 }, 00:17:07.983 { 00:17:07.983 "name": "pt3", 00:17:07.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.983 "is_configured": true, 00:17:07.983 "data_offset": 2048, 00:17:07.983 "data_size": 63488 00:17:07.983 }, 00:17:07.983 { 00:17:07.983 "name": "pt4", 00:17:07.983 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.983 "is_configured": true, 00:17:07.983 "data_offset": 2048, 00:17:07.983 "data_size": 63488 00:17:07.983 } 00:17:07.983 ] 00:17:07.983 } 00:17:07.983 } 00:17:07.983 }' 00:17:07.983 18:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:07.983 pt2 00:17:07.983 pt3 00:17:07.983 pt4' 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.983 18:14:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.983 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.264 [2024-12-06 18:14:20.256218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=95ad9398-70ef-43df-9f1b-a5a165c713d8 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
95ad9398-70ef-43df-9f1b-a5a165c713d8 ']' 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.264 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.264 [2024-12-06 18:14:20.299898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.265 [2024-12-06 18:14:20.299931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.265 [2024-12-06 18:14:20.300047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.265 [2024-12-06 18:14:20.300162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.265 [2024-12-06 18:14:20.300216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.265 
18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 18:14:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.265 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.524 [2024-12-06 18:14:20.459693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:08.524 [2024-12-06 18:14:20.461843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:08.524 [2024-12-06 18:14:20.461986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:08.524 [2024-12-06 18:14:20.462040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:08.524 [2024-12-06 18:14:20.462125] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:08.524 [2024-12-06 18:14:20.462191] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:08.524 [2024-12-06 18:14:20.462218] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:08.524 [2024-12-06 18:14:20.462245] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:08.524 [2024-12-06 18:14:20.462263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.524 [2024-12-06 18:14:20.462279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:08.524 request: 00:17:08.524 { 00:17:08.524 "name": "raid_bdev1", 00:17:08.524 "raid_level": "raid5f", 00:17:08.524 "base_bdevs": [ 00:17:08.524 "malloc1", 00:17:08.524 "malloc2", 00:17:08.524 "malloc3", 00:17:08.524 "malloc4" 00:17:08.524 ], 00:17:08.524 "strip_size_kb": 64, 00:17:08.524 "superblock": false, 00:17:08.524 "method": "bdev_raid_create", 00:17:08.524 "req_id": 1 00:17:08.524 } 00:17:08.524 Got JSON-RPC error response 
00:17:08.524 response: 00:17:08.524 { 00:17:08.524 "code": -17, 00:17:08.524 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:08.524 } 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.524 [2024-12-06 18:14:20.519514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:08.524 [2024-12-06 18:14:20.519649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:08.524 [2024-12-06 18:14:20.519691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:08.524 [2024-12-06 18:14:20.519732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.524 [2024-12-06 18:14:20.522194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.524 [2024-12-06 18:14:20.522276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:08.524 [2024-12-06 18:14:20.522401] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:08.524 [2024-12-06 18:14:20.522504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.524 pt1 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.524 "name": "raid_bdev1", 00:17:08.524 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:08.524 "strip_size_kb": 64, 00:17:08.524 "state": "configuring", 00:17:08.524 "raid_level": "raid5f", 00:17:08.524 "superblock": true, 00:17:08.524 "num_base_bdevs": 4, 00:17:08.524 "num_base_bdevs_discovered": 1, 00:17:08.524 "num_base_bdevs_operational": 4, 00:17:08.524 "base_bdevs_list": [ 00:17:08.524 { 00:17:08.524 "name": "pt1", 00:17:08.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.524 "is_configured": true, 00:17:08.524 "data_offset": 2048, 00:17:08.524 "data_size": 63488 00:17:08.524 }, 00:17:08.524 { 00:17:08.524 "name": null, 00:17:08.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.524 "is_configured": false, 00:17:08.524 "data_offset": 2048, 00:17:08.524 "data_size": 63488 00:17:08.524 }, 00:17:08.524 { 00:17:08.524 "name": null, 00:17:08.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.524 "is_configured": false, 00:17:08.524 "data_offset": 2048, 00:17:08.524 "data_size": 63488 00:17:08.524 }, 00:17:08.524 { 00:17:08.524 "name": null, 00:17:08.524 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.524 "is_configured": false, 00:17:08.524 "data_offset": 2048, 00:17:08.524 "data_size": 63488 00:17:08.524 } 00:17:08.524 ] 00:17:08.524 }' 
00:17:08.524 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.525 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:09.092 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.092 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.092 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 [2024-12-06 18:14:20.994747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.092 [2024-12-06 18:14:20.994859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.092 [2024-12-06 18:14:20.994882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:09.092 [2024-12-06 18:14:20.994896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.092 [2024-12-06 18:14:20.995453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.092 [2024-12-06 18:14:20.995549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.092 [2024-12-06 18:14:20.995672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:09.092 [2024-12-06 18:14:20.995704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.092 pt2 00:17:09.092 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.092 18:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:09.092 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:09.092 18:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 [2024-12-06 18:14:21.002768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.092 "name": "raid_bdev1", 00:17:09.092 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:09.092 "strip_size_kb": 64, 00:17:09.092 "state": "configuring", 00:17:09.092 "raid_level": "raid5f", 00:17:09.092 "superblock": true, 00:17:09.092 "num_base_bdevs": 4, 00:17:09.092 "num_base_bdevs_discovered": 1, 00:17:09.092 "num_base_bdevs_operational": 4, 00:17:09.092 "base_bdevs_list": [ 00:17:09.092 { 00:17:09.092 "name": "pt1", 00:17:09.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:09.092 "is_configured": true, 00:17:09.092 "data_offset": 2048, 00:17:09.092 "data_size": 63488 00:17:09.092 }, 00:17:09.092 { 00:17:09.092 "name": null, 00:17:09.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.092 "is_configured": false, 00:17:09.092 "data_offset": 0, 00:17:09.092 "data_size": 63488 00:17:09.092 }, 00:17:09.092 { 00:17:09.092 "name": null, 00:17:09.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.092 "is_configured": false, 00:17:09.092 "data_offset": 2048, 00:17:09.092 "data_size": 63488 00:17:09.092 }, 00:17:09.092 { 00:17:09.092 "name": null, 00:17:09.092 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:09.092 "is_configured": false, 00:17:09.092 "data_offset": 2048, 00:17:09.092 "data_size": 63488 00:17:09.092 } 00:17:09.092 ] 00:17:09.092 }' 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.092 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.350 [2024-12-06 18:14:21.438048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.350 [2024-12-06 18:14:21.438229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.350 [2024-12-06 18:14:21.438261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:09.350 [2024-12-06 18:14:21.438273] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.350 [2024-12-06 18:14:21.438815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.350 [2024-12-06 18:14:21.438846] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.350 [2024-12-06 18:14:21.438952] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:09.350 [2024-12-06 18:14:21.438978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.350 pt2 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.350 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.350 [2024-12-06 18:14:21.446018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:09.350 [2024-12-06 18:14:21.446164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.350 [2024-12-06 18:14:21.446206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:09.351 [2024-12-06 18:14:21.446220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.351 [2024-12-06 18:14:21.446747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.351 [2024-12-06 18:14:21.446777] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:09.351 [2024-12-06 18:14:21.446870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:09.351 [2024-12-06 18:14:21.446902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:09.351 pt3 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.351 [2024-12-06 18:14:21.453980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:09.351 [2024-12-06 18:14:21.454135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.351 [2024-12-06 18:14:21.454171] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:09.351 [2024-12-06 18:14:21.454183] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.351 [2024-12-06 18:14:21.454725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.351 [2024-12-06 18:14:21.454760] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:09.351 [2024-12-06 18:14:21.454858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:09.351 [2024-12-06 18:14:21.454888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:09.351 [2024-12-06 18:14:21.455059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:09.351 [2024-12-06 18:14:21.455096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:09.351 [2024-12-06 18:14:21.455382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:09.351 [2024-12-06 18:14:21.464113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:09.351 [2024-12-06 18:14:21.464151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:09.351 [2024-12-06 18:14:21.464431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.351 pt4 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.351 "name": "raid_bdev1", 00:17:09.351 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:09.351 "strip_size_kb": 64, 00:17:09.351 "state": "online", 00:17:09.351 "raid_level": "raid5f", 00:17:09.351 "superblock": true, 00:17:09.351 "num_base_bdevs": 4, 00:17:09.351 "num_base_bdevs_discovered": 4, 00:17:09.351 "num_base_bdevs_operational": 4, 00:17:09.351 "base_bdevs_list": [ 00:17:09.351 { 00:17:09.351 "name": "pt1", 00:17:09.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:09.351 "is_configured": true, 00:17:09.351 
"data_offset": 2048, 00:17:09.351 "data_size": 63488 00:17:09.351 }, 00:17:09.351 { 00:17:09.351 "name": "pt2", 00:17:09.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.351 "is_configured": true, 00:17:09.351 "data_offset": 2048, 00:17:09.351 "data_size": 63488 00:17:09.351 }, 00:17:09.351 { 00:17:09.351 "name": "pt3", 00:17:09.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.351 "is_configured": true, 00:17:09.351 "data_offset": 2048, 00:17:09.351 "data_size": 63488 00:17:09.351 }, 00:17:09.351 { 00:17:09.351 "name": "pt4", 00:17:09.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:09.351 "is_configured": true, 00:17:09.351 "data_offset": 2048, 00:17:09.351 "data_size": 63488 00:17:09.351 } 00:17:09.351 ] 00:17:09.351 }' 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.351 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.953 18:14:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:09.953 [2024-12-06 18:14:21.941833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:09.953 "name": "raid_bdev1", 00:17:09.953 "aliases": [ 00:17:09.953 "95ad9398-70ef-43df-9f1b-a5a165c713d8" 00:17:09.953 ], 00:17:09.953 "product_name": "Raid Volume", 00:17:09.953 "block_size": 512, 00:17:09.953 "num_blocks": 190464, 00:17:09.953 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:09.953 "assigned_rate_limits": { 00:17:09.953 "rw_ios_per_sec": 0, 00:17:09.953 "rw_mbytes_per_sec": 0, 00:17:09.953 "r_mbytes_per_sec": 0, 00:17:09.953 "w_mbytes_per_sec": 0 00:17:09.953 }, 00:17:09.953 "claimed": false, 00:17:09.953 "zoned": false, 00:17:09.953 "supported_io_types": { 00:17:09.953 "read": true, 00:17:09.953 "write": true, 00:17:09.953 "unmap": false, 00:17:09.953 "flush": false, 00:17:09.953 "reset": true, 00:17:09.953 "nvme_admin": false, 00:17:09.953 "nvme_io": false, 00:17:09.953 "nvme_io_md": false, 00:17:09.953 "write_zeroes": true, 00:17:09.953 "zcopy": false, 00:17:09.953 "get_zone_info": false, 00:17:09.953 "zone_management": false, 00:17:09.953 "zone_append": false, 00:17:09.953 "compare": false, 00:17:09.953 "compare_and_write": false, 00:17:09.953 "abort": false, 00:17:09.953 "seek_hole": false, 00:17:09.953 "seek_data": false, 00:17:09.953 "copy": false, 00:17:09.953 "nvme_iov_md": false 00:17:09.953 }, 00:17:09.953 "driver_specific": { 00:17:09.953 "raid": { 00:17:09.953 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:09.953 "strip_size_kb": 64, 00:17:09.953 "state": "online", 00:17:09.953 "raid_level": "raid5f", 00:17:09.953 "superblock": true, 00:17:09.953 "num_base_bdevs": 4, 00:17:09.953 "num_base_bdevs_discovered": 4, 
00:17:09.953 "num_base_bdevs_operational": 4, 00:17:09.953 "base_bdevs_list": [ 00:17:09.953 { 00:17:09.953 "name": "pt1", 00:17:09.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:09.953 "is_configured": true, 00:17:09.953 "data_offset": 2048, 00:17:09.953 "data_size": 63488 00:17:09.953 }, 00:17:09.953 { 00:17:09.953 "name": "pt2", 00:17:09.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.953 "is_configured": true, 00:17:09.953 "data_offset": 2048, 00:17:09.953 "data_size": 63488 00:17:09.953 }, 00:17:09.953 { 00:17:09.953 "name": "pt3", 00:17:09.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.953 "is_configured": true, 00:17:09.953 "data_offset": 2048, 00:17:09.953 "data_size": 63488 00:17:09.953 }, 00:17:09.953 { 00:17:09.953 "name": "pt4", 00:17:09.953 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:09.953 "is_configured": true, 00:17:09.953 "data_offset": 2048, 00:17:09.953 "data_size": 63488 00:17:09.953 } 00:17:09.953 ] 00:17:09.953 } 00:17:09.953 } 00:17:09.953 }' 00:17:09.953 18:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.953 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:09.953 pt2 00:17:09.953 pt3 00:17:09.953 pt4' 00:17:09.953 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.953 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:09.954 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.954 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:09.954 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:17:09.954 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.954 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.954 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.215 18:14:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.215 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:10.216 [2024-12-06 18:14:22.293226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.216 
18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 95ad9398-70ef-43df-9f1b-a5a165c713d8 '!=' 95ad9398-70ef-43df-9f1b-a5a165c713d8 ']' 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.216 [2024-12-06 18:14:22.337000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.216 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.475 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.475 "name": "raid_bdev1", 00:17:10.475 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:10.475 "strip_size_kb": 64, 00:17:10.475 "state": "online", 00:17:10.475 "raid_level": "raid5f", 00:17:10.475 "superblock": true, 00:17:10.475 "num_base_bdevs": 4, 00:17:10.475 "num_base_bdevs_discovered": 3, 00:17:10.475 "num_base_bdevs_operational": 3, 00:17:10.475 "base_bdevs_list": [ 00:17:10.475 { 00:17:10.475 "name": null, 00:17:10.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.475 "is_configured": false, 00:17:10.475 "data_offset": 0, 00:17:10.475 "data_size": 63488 00:17:10.475 }, 00:17:10.475 { 00:17:10.475 "name": "pt2", 00:17:10.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.475 "is_configured": true, 00:17:10.475 "data_offset": 2048, 00:17:10.475 "data_size": 63488 00:17:10.475 }, 00:17:10.475 { 00:17:10.475 "name": "pt3", 00:17:10.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.475 "is_configured": true, 00:17:10.475 "data_offset": 2048, 00:17:10.475 "data_size": 63488 00:17:10.475 }, 00:17:10.475 { 00:17:10.475 "name": "pt4", 00:17:10.475 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:10.475 "is_configured": true, 00:17:10.475 
"data_offset": 2048, 00:17:10.475 "data_size": 63488 00:17:10.475 } 00:17:10.475 ] 00:17:10.475 }' 00:17:10.475 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.475 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.734 [2024-12-06 18:14:22.788174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.734 [2024-12-06 18:14:22.788253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.734 [2024-12-06 18:14:22.788355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.734 [2024-12-06 18:14:22.788448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.734 [2024-12-06 18:14:22.788459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.734 [2024-12-06 18:14:22.875997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.734 [2024-12-06 18:14:22.876120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.734 [2024-12-06 18:14:22.876147] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:10.734 [2024-12-06 18:14:22.876157] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.734 [2024-12-06 18:14:22.878589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.734 [2024-12-06 18:14:22.878630] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.734 [2024-12-06 18:14:22.878720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:10.734 [2024-12-06 18:14:22.878775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.734 pt2 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.734 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.992 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.992 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.992 "name": "raid_bdev1", 00:17:10.992 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:10.992 "strip_size_kb": 64, 00:17:10.992 "state": "configuring", 00:17:10.992 "raid_level": "raid5f", 00:17:10.992 "superblock": true, 00:17:10.992 
"num_base_bdevs": 4, 00:17:10.992 "num_base_bdevs_discovered": 1, 00:17:10.992 "num_base_bdevs_operational": 3, 00:17:10.992 "base_bdevs_list": [ 00:17:10.992 { 00:17:10.992 "name": null, 00:17:10.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.992 "is_configured": false, 00:17:10.992 "data_offset": 2048, 00:17:10.992 "data_size": 63488 00:17:10.992 }, 00:17:10.992 { 00:17:10.992 "name": "pt2", 00:17:10.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.992 "is_configured": true, 00:17:10.992 "data_offset": 2048, 00:17:10.992 "data_size": 63488 00:17:10.992 }, 00:17:10.992 { 00:17:10.992 "name": null, 00:17:10.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.992 "is_configured": false, 00:17:10.992 "data_offset": 2048, 00:17:10.992 "data_size": 63488 00:17:10.992 }, 00:17:10.992 { 00:17:10.992 "name": null, 00:17:10.992 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:10.992 "is_configured": false, 00:17:10.992 "data_offset": 2048, 00:17:10.992 "data_size": 63488 00:17:10.992 } 00:17:10.992 ] 00:17:10.992 }' 00:17:10.992 18:14:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.992 18:14:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.251 [2024-12-06 18:14:23.311543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:11.251 [2024-12-06 
18:14:23.311718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.251 [2024-12-06 18:14:23.311838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:11.251 [2024-12-06 18:14:23.311891] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.251 [2024-12-06 18:14:23.312452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.251 [2024-12-06 18:14:23.312477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:11.251 [2024-12-06 18:14:23.312577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:11.251 [2024-12-06 18:14:23.312603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:11.251 pt3 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.251 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.251 "name": "raid_bdev1", 00:17:11.251 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:11.251 "strip_size_kb": 64, 00:17:11.251 "state": "configuring", 00:17:11.251 "raid_level": "raid5f", 00:17:11.251 "superblock": true, 00:17:11.251 "num_base_bdevs": 4, 00:17:11.251 "num_base_bdevs_discovered": 2, 00:17:11.251 "num_base_bdevs_operational": 3, 00:17:11.251 "base_bdevs_list": [ 00:17:11.251 { 00:17:11.251 "name": null, 00:17:11.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.251 "is_configured": false, 00:17:11.251 "data_offset": 2048, 00:17:11.251 "data_size": 63488 00:17:11.251 }, 00:17:11.251 { 00:17:11.251 "name": "pt2", 00:17:11.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.252 "is_configured": true, 00:17:11.252 "data_offset": 2048, 00:17:11.252 "data_size": 63488 00:17:11.252 }, 00:17:11.252 { 00:17:11.252 "name": "pt3", 00:17:11.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.252 "is_configured": true, 00:17:11.252 "data_offset": 2048, 00:17:11.252 "data_size": 63488 00:17:11.252 }, 00:17:11.252 { 00:17:11.252 "name": null, 00:17:11.252 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:11.252 "is_configured": false, 00:17:11.252 "data_offset": 2048, 
00:17:11.252 "data_size": 63488 00:17:11.252 } 00:17:11.252 ] 00:17:11.252 }' 00:17:11.252 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.252 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.820 [2024-12-06 18:14:23.770816] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:11.820 [2024-12-06 18:14:23.770888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.820 [2024-12-06 18:14:23.770912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:11.820 [2024-12-06 18:14:23.770921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.820 [2024-12-06 18:14:23.771453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.820 [2024-12-06 18:14:23.771475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:11.820 [2024-12-06 18:14:23.771570] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:11.820 [2024-12-06 18:14:23.771620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:11.820 [2024-12-06 18:14:23.771799] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:11.820 [2024-12-06 18:14:23.771810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:11.820 [2024-12-06 18:14:23.772113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:11.820 [2024-12-06 18:14:23.780200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:11.820 pt4 00:17:11.820 [2024-12-06 18:14:23.780274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:11.820 [2024-12-06 18:14:23.780633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.820 
18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.820 "name": "raid_bdev1", 00:17:11.820 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:11.820 "strip_size_kb": 64, 00:17:11.820 "state": "online", 00:17:11.820 "raid_level": "raid5f", 00:17:11.820 "superblock": true, 00:17:11.820 "num_base_bdevs": 4, 00:17:11.820 "num_base_bdevs_discovered": 3, 00:17:11.820 "num_base_bdevs_operational": 3, 00:17:11.820 "base_bdevs_list": [ 00:17:11.820 { 00:17:11.820 "name": null, 00:17:11.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.820 "is_configured": false, 00:17:11.820 "data_offset": 2048, 00:17:11.820 "data_size": 63488 00:17:11.820 }, 00:17:11.820 { 00:17:11.820 "name": "pt2", 00:17:11.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.820 "is_configured": true, 00:17:11.820 "data_offset": 2048, 00:17:11.820 "data_size": 63488 00:17:11.820 }, 00:17:11.820 { 00:17:11.820 "name": "pt3", 00:17:11.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.820 "is_configured": true, 00:17:11.820 "data_offset": 2048, 00:17:11.820 "data_size": 63488 00:17:11.820 }, 00:17:11.820 { 00:17:11.820 "name": "pt4", 00:17:11.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:11.820 "is_configured": true, 00:17:11.820 "data_offset": 2048, 00:17:11.820 "data_size": 63488 00:17:11.820 } 00:17:11.820 ] 00:17:11.820 }' 00:17:11.820 18:14:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.820 18:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.389 [2024-12-06 18:14:24.257533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.389 [2024-12-06 18:14:24.257610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.389 [2024-12-06 18:14:24.257747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.389 [2024-12-06 18:14:24.257872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.389 [2024-12-06 18:14:24.257946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.389 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.390 [2024-12-06 18:14:24.325412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:12.390 [2024-12-06 18:14:24.325535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.390 [2024-12-06 18:14:24.325574] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:12.390 [2024-12-06 18:14:24.325592] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.390 [2024-12-06 18:14:24.328295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.390 [2024-12-06 18:14:24.328343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:12.390 [2024-12-06 18:14:24.328467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:12.390 [2024-12-06 18:14:24.328531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:12.390 
[2024-12-06 18:14:24.328693] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:12.390 [2024-12-06 18:14:24.328716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.390 [2024-12-06 18:14:24.328735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:12.390 [2024-12-06 18:14:24.328815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:12.390 [2024-12-06 18:14:24.328951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:12.390 pt1 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.390 "name": "raid_bdev1", 00:17:12.390 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:12.390 "strip_size_kb": 64, 00:17:12.390 "state": "configuring", 00:17:12.390 "raid_level": "raid5f", 00:17:12.390 "superblock": true, 00:17:12.390 "num_base_bdevs": 4, 00:17:12.390 "num_base_bdevs_discovered": 2, 00:17:12.390 "num_base_bdevs_operational": 3, 00:17:12.390 "base_bdevs_list": [ 00:17:12.390 { 00:17:12.390 "name": null, 00:17:12.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.390 "is_configured": false, 00:17:12.390 "data_offset": 2048, 00:17:12.390 "data_size": 63488 00:17:12.390 }, 00:17:12.390 { 00:17:12.390 "name": "pt2", 00:17:12.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.390 "is_configured": true, 00:17:12.390 "data_offset": 2048, 00:17:12.390 "data_size": 63488 00:17:12.390 }, 00:17:12.390 { 00:17:12.390 "name": "pt3", 00:17:12.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.390 "is_configured": true, 00:17:12.390 "data_offset": 2048, 00:17:12.390 "data_size": 63488 00:17:12.390 }, 00:17:12.390 { 00:17:12.390 "name": null, 00:17:12.390 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:12.390 "is_configured": false, 00:17:12.390 "data_offset": 2048, 00:17:12.390 "data_size": 63488 00:17:12.390 } 00:17:12.390 ] 
00:17:12.390 }' 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.390 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.651 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:12.651 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.651 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.651 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:12.651 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.910 [2024-12-06 18:14:24.844601] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:12.910 [2024-12-06 18:14:24.844678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.910 [2024-12-06 18:14:24.844707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:12.910 [2024-12-06 18:14:24.844719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.910 [2024-12-06 18:14:24.845322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.910 [2024-12-06 18:14:24.845360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:12.910 [2024-12-06 18:14:24.845461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:12.910 [2024-12-06 18:14:24.845489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:12.910 [2024-12-06 18:14:24.845660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:12.910 [2024-12-06 18:14:24.845676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:12.910 [2024-12-06 18:14:24.845985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:12.910 [2024-12-06 18:14:24.855167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:12.910 [2024-12-06 18:14:24.855241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:12.910 [2024-12-06 18:14:24.855640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.910 pt4 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.910 18:14:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.910 "name": "raid_bdev1", 00:17:12.910 "uuid": "95ad9398-70ef-43df-9f1b-a5a165c713d8", 00:17:12.910 "strip_size_kb": 64, 00:17:12.910 "state": "online", 00:17:12.910 "raid_level": "raid5f", 00:17:12.910 "superblock": true, 00:17:12.910 "num_base_bdevs": 4, 00:17:12.910 "num_base_bdevs_discovered": 3, 00:17:12.910 "num_base_bdevs_operational": 3, 00:17:12.910 "base_bdevs_list": [ 00:17:12.910 { 00:17:12.910 "name": null, 00:17:12.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.910 "is_configured": false, 00:17:12.910 "data_offset": 2048, 00:17:12.910 "data_size": 63488 00:17:12.910 }, 00:17:12.910 { 00:17:12.910 "name": "pt2", 00:17:12.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.910 "is_configured": true, 00:17:12.910 "data_offset": 2048, 00:17:12.910 "data_size": 63488 00:17:12.910 }, 00:17:12.910 { 00:17:12.910 "name": "pt3", 00:17:12.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:12.910 "is_configured": true, 00:17:12.910 "data_offset": 2048, 00:17:12.910 "data_size": 63488 
00:17:12.910 }, 00:17:12.910 { 00:17:12.910 "name": "pt4", 00:17:12.910 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:12.910 "is_configured": true, 00:17:12.910 "data_offset": 2048, 00:17:12.910 "data_size": 63488 00:17:12.910 } 00:17:12.910 ] 00:17:12.910 }' 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.910 18:14:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.168 18:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:13.168 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.168 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.168 18:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:13.168 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:13.426 [2024-12-06 18:14:25.373342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 95ad9398-70ef-43df-9f1b-a5a165c713d8 '!=' 95ad9398-70ef-43df-9f1b-a5a165c713d8 ']' 00:17:13.426 18:14:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84690 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84690 ']' 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84690 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84690 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84690' 00:17:13.426 killing process with pid 84690 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84690 00:17:13.426 18:14:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84690 00:17:13.426 [2024-12-06 18:14:25.460067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.426 [2024-12-06 18:14:25.460223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.426 [2024-12-06 18:14:25.460333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.426 [2024-12-06 18:14:25.460353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:13.991 [2024-12-06 18:14:25.882447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:14.928 ************************************ 00:17:14.928 END TEST raid5f_superblock_test 00:17:14.928 
************************************ 00:17:14.928 18:14:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:14.928 00:17:14.928 real 0m8.804s 00:17:14.928 user 0m13.889s 00:17:14.928 sys 0m1.517s 00:17:14.928 18:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.928 18:14:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.190 18:14:27 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:15.190 18:14:27 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:15.190 18:14:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:15.190 18:14:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.190 18:14:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.190 ************************************ 00:17:15.190 START TEST raid5f_rebuild_test 00:17:15.190 ************************************ 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.190 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:15.191 18:14:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85171 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85171 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85171 ']' 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.191 18:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.191 [2024-12-06 18:14:27.221153] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:17:15.191 [2024-12-06 18:14:27.221355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:15.191 Zero copy mechanism will not be used. 
00:17:15.191 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85171 ] 00:17:15.464 [2024-12-06 18:14:27.395454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.464 [2024-12-06 18:14:27.518023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.723 [2024-12-06 18:14:27.721397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:15.723 [2024-12-06 18:14:27.721548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.990 BaseBdev1_malloc 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.990 [2024-12-06 18:14:28.115627] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:15.990 [2024-12-06 18:14:28.115692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:17:15.990 [2024-12-06 18:14:28.115713] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:15.990 [2024-12-06 18:14:28.115725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.990 [2024-12-06 18:14:28.117983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.990 [2024-12-06 18:14:28.118025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:15.990 BaseBdev1 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.990 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.256 BaseBdev2_malloc 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.256 [2024-12-06 18:14:28.170322] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:16.256 [2024-12-06 18:14:28.170384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.256 [2024-12-06 18:14:28.170408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:16.256 [2024-12-06 18:14:28.170420] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.256 [2024-12-06 18:14:28.172664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.256 [2024-12-06 18:14:28.172705] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:16.256 BaseBdev2 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.256 BaseBdev3_malloc 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.256 [2024-12-06 18:14:28.238860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:16.256 [2024-12-06 18:14:28.238917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.256 [2024-12-06 18:14:28.238939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:16.256 [2024-12-06 18:14:28.238951] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.256 [2024-12-06 18:14:28.241251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.256 [2024-12-06 
18:14:28.241337] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:16.256 BaseBdev3 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.256 BaseBdev4_malloc 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.256 [2024-12-06 18:14:28.296232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:16.256 [2024-12-06 18:14:28.296306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.256 [2024-12-06 18:14:28.296331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:16.256 [2024-12-06 18:14:28.296343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.256 [2024-12-06 18:14:28.298716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.256 [2024-12-06 18:14:28.298761] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:16.256 BaseBdev4 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.256 spare_malloc 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.256 spare_delay 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.256 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.256 [2024-12-06 18:14:28.362973] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.256 [2024-12-06 18:14:28.363031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.256 [2024-12-06 18:14:28.363049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:16.256 [2024-12-06 18:14:28.363060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.256 [2024-12-06 18:14:28.365362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.257 [2024-12-06 18:14:28.365401] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.257 spare 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.257 [2024-12-06 18:14:28.374995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.257 [2024-12-06 18:14:28.377060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.257 [2024-12-06 18:14:28.377189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:16.257 [2024-12-06 18:14:28.377251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:16.257 [2024-12-06 18:14:28.377369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:16.257 [2024-12-06 18:14:28.377384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:16.257 [2024-12-06 18:14:28.377671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:16.257 [2024-12-06 18:14:28.385553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:16.257 [2024-12-06 18:14:28.385573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:16.257 [2024-12-06 18:14:28.385780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.257 18:14:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.257 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.519 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.519 "name": "raid_bdev1", 00:17:16.519 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:16.519 "strip_size_kb": 64, 00:17:16.519 "state": "online", 00:17:16.519 "raid_level": "raid5f", 00:17:16.519 "superblock": false, 00:17:16.519 "num_base_bdevs": 4, 00:17:16.519 
"num_base_bdevs_discovered": 4, 00:17:16.519 "num_base_bdevs_operational": 4, 00:17:16.519 "base_bdevs_list": [ 00:17:16.519 { 00:17:16.519 "name": "BaseBdev1", 00:17:16.519 "uuid": "d76871df-5d13-5bac-a625-b53d50703157", 00:17:16.519 "is_configured": true, 00:17:16.519 "data_offset": 0, 00:17:16.519 "data_size": 65536 00:17:16.519 }, 00:17:16.519 { 00:17:16.519 "name": "BaseBdev2", 00:17:16.519 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:16.519 "is_configured": true, 00:17:16.519 "data_offset": 0, 00:17:16.519 "data_size": 65536 00:17:16.519 }, 00:17:16.519 { 00:17:16.519 "name": "BaseBdev3", 00:17:16.519 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:16.519 "is_configured": true, 00:17:16.519 "data_offset": 0, 00:17:16.519 "data_size": 65536 00:17:16.519 }, 00:17:16.519 { 00:17:16.519 "name": "BaseBdev4", 00:17:16.519 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:16.519 "is_configured": true, 00:17:16.519 "data_offset": 0, 00:17:16.519 "data_size": 65536 00:17:16.519 } 00:17:16.519 ] 00:17:16.519 }' 00:17:16.519 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.519 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:16.778 [2024-12-06 18:14:28.842233] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:16.778 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:16.779 18:14:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:17.039 [2024-12-06 18:14:29.141513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:17.039 /dev/nbd0 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.039 1+0 records in 00:17:17.039 1+0 records out 00:17:17.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212408 s, 19.3 MB/s 00:17:17.039 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:17.298 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:17.867 512+0 records in 00:17:17.867 512+0 records out 00:17:17.867 100663296 bytes (101 MB, 96 MiB) copied, 0.529549 s, 190 MB/s 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:17.867 [2024-12-06 18:14:29.953955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.867 [2024-12-06 18:14:29.993126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.867 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.868 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.868 18:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:17:17.868 18:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.868 18:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.868 18:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.868 18:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.868 18:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.868 18:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.868 18:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.868 18:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.126 18:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.126 "name": "raid_bdev1", 00:17:18.126 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:18.126 "strip_size_kb": 64, 00:17:18.126 "state": "online", 00:17:18.126 "raid_level": "raid5f", 00:17:18.126 "superblock": false, 00:17:18.126 "num_base_bdevs": 4, 00:17:18.126 "num_base_bdevs_discovered": 3, 00:17:18.126 "num_base_bdevs_operational": 3, 00:17:18.126 "base_bdevs_list": [ 00:17:18.126 { 00:17:18.127 "name": null, 00:17:18.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.127 "is_configured": false, 00:17:18.127 "data_offset": 0, 00:17:18.127 "data_size": 65536 00:17:18.127 }, 00:17:18.127 { 00:17:18.127 "name": "BaseBdev2", 00:17:18.127 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:18.127 "is_configured": true, 00:17:18.127 "data_offset": 0, 00:17:18.127 "data_size": 65536 00:17:18.127 }, 00:17:18.127 { 00:17:18.127 "name": "BaseBdev3", 00:17:18.127 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:18.127 "is_configured": true, 00:17:18.127 "data_offset": 0, 
00:17:18.127 "data_size": 65536 00:17:18.127 }, 00:17:18.127 { 00:17:18.127 "name": "BaseBdev4", 00:17:18.127 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:18.127 "is_configured": true, 00:17:18.127 "data_offset": 0, 00:17:18.127 "data_size": 65536 00:17:18.127 } 00:17:18.127 ] 00:17:18.127 }' 00:17:18.127 18:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.127 18:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.384 18:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:18.384 18:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.384 18:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.384 [2024-12-06 18:14:30.444375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.385 [2024-12-06 18:14:30.463607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:18.385 18:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.385 18:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:18.385 [2024-12-06 18:14:30.475565] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.321 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.321 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.321 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.321 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.321 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.321 18:14:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.321 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.321 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.322 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.581 "name": "raid_bdev1", 00:17:19.581 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:19.581 "strip_size_kb": 64, 00:17:19.581 "state": "online", 00:17:19.581 "raid_level": "raid5f", 00:17:19.581 "superblock": false, 00:17:19.581 "num_base_bdevs": 4, 00:17:19.581 "num_base_bdevs_discovered": 4, 00:17:19.581 "num_base_bdevs_operational": 4, 00:17:19.581 "process": { 00:17:19.581 "type": "rebuild", 00:17:19.581 "target": "spare", 00:17:19.581 "progress": { 00:17:19.581 "blocks": 17280, 00:17:19.581 "percent": 8 00:17:19.581 } 00:17:19.581 }, 00:17:19.581 "base_bdevs_list": [ 00:17:19.581 { 00:17:19.581 "name": "spare", 00:17:19.581 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:19.581 "is_configured": true, 00:17:19.581 "data_offset": 0, 00:17:19.581 "data_size": 65536 00:17:19.581 }, 00:17:19.581 { 00:17:19.581 "name": "BaseBdev2", 00:17:19.581 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:19.581 "is_configured": true, 00:17:19.581 "data_offset": 0, 00:17:19.581 "data_size": 65536 00:17:19.581 }, 00:17:19.581 { 00:17:19.581 "name": "BaseBdev3", 00:17:19.581 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:19.581 "is_configured": true, 00:17:19.581 "data_offset": 0, 00:17:19.581 "data_size": 65536 00:17:19.581 }, 00:17:19.581 { 00:17:19.581 "name": "BaseBdev4", 00:17:19.581 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 
00:17:19.581 "is_configured": true, 00:17:19.581 "data_offset": 0, 00:17:19.581 "data_size": 65536 00:17:19.581 } 00:17:19.581 ] 00:17:19.581 }' 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.581 [2024-12-06 18:14:31.611293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.581 [2024-12-06 18:14:31.685532] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:19.581 [2024-12-06 18:14:31.685647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.581 [2024-12-06 18:14:31.685670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.581 [2024-12-06 18:14:31.685681] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.581 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.841 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.841 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.841 "name": "raid_bdev1", 00:17:19.841 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:19.841 "strip_size_kb": 64, 00:17:19.841 "state": "online", 00:17:19.841 "raid_level": "raid5f", 00:17:19.841 "superblock": false, 00:17:19.841 "num_base_bdevs": 4, 00:17:19.841 "num_base_bdevs_discovered": 3, 00:17:19.841 "num_base_bdevs_operational": 3, 00:17:19.841 "base_bdevs_list": [ 00:17:19.841 { 00:17:19.841 "name": null, 00:17:19.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.841 "is_configured": false, 00:17:19.841 "data_offset": 0, 00:17:19.841 "data_size": 65536 
00:17:19.841 }, 00:17:19.841 { 00:17:19.841 "name": "BaseBdev2", 00:17:19.841 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:19.841 "is_configured": true, 00:17:19.841 "data_offset": 0, 00:17:19.841 "data_size": 65536 00:17:19.841 }, 00:17:19.841 { 00:17:19.841 "name": "BaseBdev3", 00:17:19.841 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:19.841 "is_configured": true, 00:17:19.841 "data_offset": 0, 00:17:19.841 "data_size": 65536 00:17:19.841 }, 00:17:19.841 { 00:17:19.841 "name": "BaseBdev4", 00:17:19.841 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:19.841 "is_configured": true, 00:17:19.841 "data_offset": 0, 00:17:19.841 "data_size": 65536 00:17:19.841 } 00:17:19.841 ] 00:17:19.841 }' 00:17:19.841 18:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.841 18:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:20.100 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.101 "name": "raid_bdev1", 00:17:20.101 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:20.101 "strip_size_kb": 64, 00:17:20.101 "state": "online", 00:17:20.101 "raid_level": "raid5f", 00:17:20.101 "superblock": false, 00:17:20.101 "num_base_bdevs": 4, 00:17:20.101 "num_base_bdevs_discovered": 3, 00:17:20.101 "num_base_bdevs_operational": 3, 00:17:20.101 "base_bdevs_list": [ 00:17:20.101 { 00:17:20.101 "name": null, 00:17:20.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.101 "is_configured": false, 00:17:20.101 "data_offset": 0, 00:17:20.101 "data_size": 65536 00:17:20.101 }, 00:17:20.101 { 00:17:20.101 "name": "BaseBdev2", 00:17:20.101 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:20.101 "is_configured": true, 00:17:20.101 "data_offset": 0, 00:17:20.101 "data_size": 65536 00:17:20.101 }, 00:17:20.101 { 00:17:20.101 "name": "BaseBdev3", 00:17:20.101 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:20.101 "is_configured": true, 00:17:20.101 "data_offset": 0, 00:17:20.101 "data_size": 65536 00:17:20.101 }, 00:17:20.101 { 00:17:20.101 "name": "BaseBdev4", 00:17:20.101 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:20.101 "is_configured": true, 00:17:20.101 "data_offset": 0, 00:17:20.101 "data_size": 65536 00:17:20.101 } 00:17:20.101 ] 00:17:20.101 }' 00:17:20.101 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.101 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.101 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.360 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.360 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:17:20.360 18:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.360 18:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.360 [2024-12-06 18:14:32.305645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.360 [2024-12-06 18:14:32.323619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:20.360 18:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.360 18:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:20.360 [2024-12-06 18:14:32.333791] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.298 
"name": "raid_bdev1", 00:17:21.298 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:21.298 "strip_size_kb": 64, 00:17:21.298 "state": "online", 00:17:21.298 "raid_level": "raid5f", 00:17:21.298 "superblock": false, 00:17:21.298 "num_base_bdevs": 4, 00:17:21.298 "num_base_bdevs_discovered": 4, 00:17:21.298 "num_base_bdevs_operational": 4, 00:17:21.298 "process": { 00:17:21.298 "type": "rebuild", 00:17:21.298 "target": "spare", 00:17:21.298 "progress": { 00:17:21.298 "blocks": 17280, 00:17:21.298 "percent": 8 00:17:21.298 } 00:17:21.298 }, 00:17:21.298 "base_bdevs_list": [ 00:17:21.298 { 00:17:21.298 "name": "spare", 00:17:21.298 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:21.298 "is_configured": true, 00:17:21.298 "data_offset": 0, 00:17:21.298 "data_size": 65536 00:17:21.298 }, 00:17:21.298 { 00:17:21.298 "name": "BaseBdev2", 00:17:21.298 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:21.298 "is_configured": true, 00:17:21.298 "data_offset": 0, 00:17:21.298 "data_size": 65536 00:17:21.298 }, 00:17:21.298 { 00:17:21.298 "name": "BaseBdev3", 00:17:21.298 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:21.298 "is_configured": true, 00:17:21.298 "data_offset": 0, 00:17:21.298 "data_size": 65536 00:17:21.298 }, 00:17:21.298 { 00:17:21.298 "name": "BaseBdev4", 00:17:21.298 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:21.298 "is_configured": true, 00:17:21.298 "data_offset": 0, 00:17:21.298 "data_size": 65536 00:17:21.298 } 00:17:21.298 ] 00:17:21.298 }' 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.298 18:14:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:21.298 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=647 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.558 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.558 "name": "raid_bdev1", 00:17:21.558 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:21.558 "strip_size_kb": 64, 00:17:21.558 "state": "online", 00:17:21.558 "raid_level": "raid5f", 00:17:21.558 "superblock": false, 00:17:21.558 "num_base_bdevs": 4, 00:17:21.558 
"num_base_bdevs_discovered": 4, 00:17:21.558 "num_base_bdevs_operational": 4, 00:17:21.558 "process": { 00:17:21.558 "type": "rebuild", 00:17:21.558 "target": "spare", 00:17:21.558 "progress": { 00:17:21.558 "blocks": 21120, 00:17:21.558 "percent": 10 00:17:21.558 } 00:17:21.558 }, 00:17:21.558 "base_bdevs_list": [ 00:17:21.558 { 00:17:21.558 "name": "spare", 00:17:21.558 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:21.558 "is_configured": true, 00:17:21.558 "data_offset": 0, 00:17:21.558 "data_size": 65536 00:17:21.558 }, 00:17:21.558 { 00:17:21.558 "name": "BaseBdev2", 00:17:21.558 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:21.558 "is_configured": true, 00:17:21.558 "data_offset": 0, 00:17:21.558 "data_size": 65536 00:17:21.558 }, 00:17:21.558 { 00:17:21.558 "name": "BaseBdev3", 00:17:21.558 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:21.558 "is_configured": true, 00:17:21.558 "data_offset": 0, 00:17:21.558 "data_size": 65536 00:17:21.558 }, 00:17:21.558 { 00:17:21.558 "name": "BaseBdev4", 00:17:21.559 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:21.559 "is_configured": true, 00:17:21.559 "data_offset": 0, 00:17:21.559 "data_size": 65536 00:17:21.559 } 00:17:21.559 ] 00:17:21.559 }' 00:17:21.559 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.559 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.559 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.559 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.559 18:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.495 18:14:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.755 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.755 "name": "raid_bdev1", 00:17:22.755 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:22.755 "strip_size_kb": 64, 00:17:22.755 "state": "online", 00:17:22.755 "raid_level": "raid5f", 00:17:22.755 "superblock": false, 00:17:22.755 "num_base_bdevs": 4, 00:17:22.755 "num_base_bdevs_discovered": 4, 00:17:22.755 "num_base_bdevs_operational": 4, 00:17:22.755 "process": { 00:17:22.755 "type": "rebuild", 00:17:22.755 "target": "spare", 00:17:22.755 "progress": { 00:17:22.755 "blocks": 42240, 00:17:22.755 "percent": 21 00:17:22.755 } 00:17:22.755 }, 00:17:22.755 "base_bdevs_list": [ 00:17:22.755 { 00:17:22.755 "name": "spare", 00:17:22.755 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:22.755 "is_configured": true, 00:17:22.755 "data_offset": 0, 00:17:22.755 "data_size": 65536 00:17:22.755 }, 00:17:22.755 { 00:17:22.755 "name": "BaseBdev2", 00:17:22.755 "uuid": 
"63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:22.755 "is_configured": true, 00:17:22.755 "data_offset": 0, 00:17:22.755 "data_size": 65536 00:17:22.755 }, 00:17:22.755 { 00:17:22.755 "name": "BaseBdev3", 00:17:22.755 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:22.755 "is_configured": true, 00:17:22.755 "data_offset": 0, 00:17:22.755 "data_size": 65536 00:17:22.755 }, 00:17:22.755 { 00:17:22.755 "name": "BaseBdev4", 00:17:22.755 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:22.755 "is_configured": true, 00:17:22.755 "data_offset": 0, 00:17:22.755 "data_size": 65536 00:17:22.755 } 00:17:22.755 ] 00:17:22.755 }' 00:17:22.755 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.755 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.755 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.755 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.755 18:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.716 18:14:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.716 "name": "raid_bdev1", 00:17:23.716 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:23.716 "strip_size_kb": 64, 00:17:23.716 "state": "online", 00:17:23.716 "raid_level": "raid5f", 00:17:23.716 "superblock": false, 00:17:23.716 "num_base_bdevs": 4, 00:17:23.716 "num_base_bdevs_discovered": 4, 00:17:23.716 "num_base_bdevs_operational": 4, 00:17:23.716 "process": { 00:17:23.716 "type": "rebuild", 00:17:23.716 "target": "spare", 00:17:23.716 "progress": { 00:17:23.716 "blocks": 65280, 00:17:23.716 "percent": 33 00:17:23.716 } 00:17:23.716 }, 00:17:23.716 "base_bdevs_list": [ 00:17:23.716 { 00:17:23.716 "name": "spare", 00:17:23.716 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:23.716 "is_configured": true, 00:17:23.716 "data_offset": 0, 00:17:23.716 "data_size": 65536 00:17:23.716 }, 00:17:23.716 { 00:17:23.716 "name": "BaseBdev2", 00:17:23.716 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:23.716 "is_configured": true, 00:17:23.716 "data_offset": 0, 00:17:23.716 "data_size": 65536 00:17:23.716 }, 00:17:23.716 { 00:17:23.716 "name": "BaseBdev3", 00:17:23.716 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:23.716 "is_configured": true, 00:17:23.716 "data_offset": 0, 00:17:23.716 "data_size": 65536 00:17:23.716 }, 00:17:23.716 { 00:17:23.716 "name": "BaseBdev4", 00:17:23.716 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:23.716 "is_configured": true, 00:17:23.716 "data_offset": 0, 00:17:23.716 "data_size": 65536 00:17:23.716 } 
00:17:23.716 ] 00:17:23.716 }' 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.716 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.976 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.976 18:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.916 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.916 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.916 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.917 "name": "raid_bdev1", 00:17:24.917 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:24.917 
"strip_size_kb": 64, 00:17:24.917 "state": "online", 00:17:24.917 "raid_level": "raid5f", 00:17:24.917 "superblock": false, 00:17:24.917 "num_base_bdevs": 4, 00:17:24.917 "num_base_bdevs_discovered": 4, 00:17:24.917 "num_base_bdevs_operational": 4, 00:17:24.917 "process": { 00:17:24.917 "type": "rebuild", 00:17:24.917 "target": "spare", 00:17:24.917 "progress": { 00:17:24.917 "blocks": 86400, 00:17:24.917 "percent": 43 00:17:24.917 } 00:17:24.917 }, 00:17:24.917 "base_bdevs_list": [ 00:17:24.917 { 00:17:24.917 "name": "spare", 00:17:24.917 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:24.917 "is_configured": true, 00:17:24.917 "data_offset": 0, 00:17:24.917 "data_size": 65536 00:17:24.917 }, 00:17:24.917 { 00:17:24.917 "name": "BaseBdev2", 00:17:24.917 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:24.917 "is_configured": true, 00:17:24.917 "data_offset": 0, 00:17:24.917 "data_size": 65536 00:17:24.917 }, 00:17:24.917 { 00:17:24.917 "name": "BaseBdev3", 00:17:24.917 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:24.917 "is_configured": true, 00:17:24.917 "data_offset": 0, 00:17:24.917 "data_size": 65536 00:17:24.917 }, 00:17:24.917 { 00:17:24.917 "name": "BaseBdev4", 00:17:24.917 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:24.917 "is_configured": true, 00:17:24.917 "data_offset": 0, 00:17:24.917 "data_size": 65536 00:17:24.917 } 00:17:24.917 ] 00:17:24.917 }' 00:17:24.917 18:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.917 18:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.917 18:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.917 18:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.917 18:14:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:26.295 18:14:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.295 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.295 "name": "raid_bdev1", 00:17:26.295 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:26.295 "strip_size_kb": 64, 00:17:26.295 "state": "online", 00:17:26.295 "raid_level": "raid5f", 00:17:26.295 "superblock": false, 00:17:26.295 "num_base_bdevs": 4, 00:17:26.295 "num_base_bdevs_discovered": 4, 00:17:26.295 "num_base_bdevs_operational": 4, 00:17:26.295 "process": { 00:17:26.295 "type": "rebuild", 00:17:26.295 "target": "spare", 00:17:26.295 "progress": { 00:17:26.295 "blocks": 109440, 00:17:26.295 "percent": 55 00:17:26.295 } 00:17:26.295 }, 00:17:26.295 "base_bdevs_list": [ 00:17:26.295 { 00:17:26.295 "name": "spare", 00:17:26.295 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 
00:17:26.295 "is_configured": true, 00:17:26.296 "data_offset": 0, 00:17:26.296 "data_size": 65536 00:17:26.296 }, 00:17:26.296 { 00:17:26.296 "name": "BaseBdev2", 00:17:26.296 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:26.296 "is_configured": true, 00:17:26.296 "data_offset": 0, 00:17:26.296 "data_size": 65536 00:17:26.296 }, 00:17:26.296 { 00:17:26.296 "name": "BaseBdev3", 00:17:26.296 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:26.296 "is_configured": true, 00:17:26.296 "data_offset": 0, 00:17:26.296 "data_size": 65536 00:17:26.296 }, 00:17:26.296 { 00:17:26.296 "name": "BaseBdev4", 00:17:26.296 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:26.296 "is_configured": true, 00:17:26.296 "data_offset": 0, 00:17:26.296 "data_size": 65536 00:17:26.296 } 00:17:26.296 ] 00:17:26.296 }' 00:17:26.296 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.296 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.296 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.296 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.296 18:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.234 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.234 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.234 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.234 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.234 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.234 18:14:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.234 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.234 18:14:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.235 18:14:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.235 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.235 18:14:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.235 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.235 "name": "raid_bdev1", 00:17:27.235 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:27.235 "strip_size_kb": 64, 00:17:27.235 "state": "online", 00:17:27.235 "raid_level": "raid5f", 00:17:27.235 "superblock": false, 00:17:27.235 "num_base_bdevs": 4, 00:17:27.235 "num_base_bdevs_discovered": 4, 00:17:27.235 "num_base_bdevs_operational": 4, 00:17:27.235 "process": { 00:17:27.235 "type": "rebuild", 00:17:27.235 "target": "spare", 00:17:27.235 "progress": { 00:17:27.235 "blocks": 130560, 00:17:27.235 "percent": 66 00:17:27.235 } 00:17:27.235 }, 00:17:27.235 "base_bdevs_list": [ 00:17:27.235 { 00:17:27.235 "name": "spare", 00:17:27.235 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:27.235 "is_configured": true, 00:17:27.235 "data_offset": 0, 00:17:27.235 "data_size": 65536 00:17:27.235 }, 00:17:27.235 { 00:17:27.235 "name": "BaseBdev2", 00:17:27.235 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:27.235 "is_configured": true, 00:17:27.235 "data_offset": 0, 00:17:27.235 "data_size": 65536 00:17:27.235 }, 00:17:27.235 { 00:17:27.235 "name": "BaseBdev3", 00:17:27.235 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:27.235 "is_configured": true, 00:17:27.235 "data_offset": 0, 00:17:27.235 "data_size": 65536 00:17:27.235 }, 00:17:27.235 { 00:17:27.235 "name": 
"BaseBdev4", 00:17:27.235 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:27.235 "is_configured": true, 00:17:27.235 "data_offset": 0, 00:17:27.235 "data_size": 65536 00:17:27.235 } 00:17:27.235 ] 00:17:27.235 }' 00:17:27.235 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.235 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.235 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.235 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.235 18:14:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.613 18:14:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.613 "name": "raid_bdev1", 00:17:28.613 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:28.613 "strip_size_kb": 64, 00:17:28.613 "state": "online", 00:17:28.613 "raid_level": "raid5f", 00:17:28.613 "superblock": false, 00:17:28.613 "num_base_bdevs": 4, 00:17:28.613 "num_base_bdevs_discovered": 4, 00:17:28.613 "num_base_bdevs_operational": 4, 00:17:28.613 "process": { 00:17:28.613 "type": "rebuild", 00:17:28.613 "target": "spare", 00:17:28.613 "progress": { 00:17:28.613 "blocks": 151680, 00:17:28.613 "percent": 77 00:17:28.613 } 00:17:28.613 }, 00:17:28.613 "base_bdevs_list": [ 00:17:28.613 { 00:17:28.613 "name": "spare", 00:17:28.613 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:28.613 "is_configured": true, 00:17:28.613 "data_offset": 0, 00:17:28.613 "data_size": 65536 00:17:28.613 }, 00:17:28.613 { 00:17:28.613 "name": "BaseBdev2", 00:17:28.613 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:28.613 "is_configured": true, 00:17:28.613 "data_offset": 0, 00:17:28.613 "data_size": 65536 00:17:28.613 }, 00:17:28.613 { 00:17:28.613 "name": "BaseBdev3", 00:17:28.613 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:28.613 "is_configured": true, 00:17:28.613 "data_offset": 0, 00:17:28.613 "data_size": 65536 00:17:28.613 }, 00:17:28.613 { 00:17:28.613 "name": "BaseBdev4", 00:17:28.613 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:28.613 "is_configured": true, 00:17:28.613 "data_offset": 0, 00:17:28.613 "data_size": 65536 00:17:28.613 } 00:17:28.613 ] 00:17:28.613 }' 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.613 18:14:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.547 18:14:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.548 18:14:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.548 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.548 "name": "raid_bdev1", 00:17:29.548 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:29.548 "strip_size_kb": 64, 00:17:29.548 "state": "online", 00:17:29.548 "raid_level": "raid5f", 00:17:29.548 "superblock": false, 00:17:29.548 "num_base_bdevs": 4, 00:17:29.548 "num_base_bdevs_discovered": 4, 00:17:29.548 "num_base_bdevs_operational": 4, 00:17:29.548 "process": { 00:17:29.548 "type": "rebuild", 00:17:29.548 "target": "spare", 00:17:29.548 "progress": { 00:17:29.548 "blocks": 174720, 00:17:29.548 "percent": 88 
00:17:29.548 } 00:17:29.548 }, 00:17:29.548 "base_bdevs_list": [ 00:17:29.548 { 00:17:29.548 "name": "spare", 00:17:29.548 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:29.548 "is_configured": true, 00:17:29.548 "data_offset": 0, 00:17:29.548 "data_size": 65536 00:17:29.548 }, 00:17:29.548 { 00:17:29.548 "name": "BaseBdev2", 00:17:29.548 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:29.548 "is_configured": true, 00:17:29.548 "data_offset": 0, 00:17:29.548 "data_size": 65536 00:17:29.548 }, 00:17:29.548 { 00:17:29.548 "name": "BaseBdev3", 00:17:29.548 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:29.548 "is_configured": true, 00:17:29.548 "data_offset": 0, 00:17:29.548 "data_size": 65536 00:17:29.548 }, 00:17:29.548 { 00:17:29.548 "name": "BaseBdev4", 00:17:29.548 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:29.548 "is_configured": true, 00:17:29.548 "data_offset": 0, 00:17:29.548 "data_size": 65536 00:17:29.548 } 00:17:29.548 ] 00:17:29.548 }' 00:17:29.548 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.548 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.548 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.548 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.548 18:14:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.929 [2024-12-06 18:14:42.711695] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:30.929 [2024-12-06 18:14:42.711843] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:30.929 [2024-12-06 18:14:42.711931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.929 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.929 "name": "raid_bdev1", 00:17:30.929 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:30.929 "strip_size_kb": 64, 00:17:30.929 "state": "online", 00:17:30.929 "raid_level": "raid5f", 00:17:30.929 "superblock": false, 00:17:30.929 "num_base_bdevs": 4, 00:17:30.929 "num_base_bdevs_discovered": 4, 00:17:30.929 "num_base_bdevs_operational": 4, 00:17:30.929 "process": { 00:17:30.929 "type": "rebuild", 00:17:30.929 "target": "spare", 00:17:30.929 "progress": { 00:17:30.929 "blocks": 195840, 00:17:30.929 "percent": 99 00:17:30.929 } 00:17:30.929 }, 00:17:30.929 "base_bdevs_list": [ 00:17:30.929 { 00:17:30.929 "name": "spare", 00:17:30.929 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:30.929 "is_configured": true, 00:17:30.929 "data_offset": 
0, 00:17:30.929 "data_size": 65536 00:17:30.929 }, 00:17:30.929 { 00:17:30.929 "name": "BaseBdev2", 00:17:30.929 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:30.930 "is_configured": true, 00:17:30.930 "data_offset": 0, 00:17:30.930 "data_size": 65536 00:17:30.930 }, 00:17:30.930 { 00:17:30.930 "name": "BaseBdev3", 00:17:30.930 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:30.930 "is_configured": true, 00:17:30.930 "data_offset": 0, 00:17:30.930 "data_size": 65536 00:17:30.930 }, 00:17:30.930 { 00:17:30.930 "name": "BaseBdev4", 00:17:30.930 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:30.930 "is_configured": true, 00:17:30.930 "data_offset": 0, 00:17:30.930 "data_size": 65536 00:17:30.930 } 00:17:30.930 ] 00:17:30.930 }' 00:17:30.930 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.930 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.930 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.930 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.930 18:14:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.870 18:14:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.870 "name": "raid_bdev1", 00:17:31.870 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:31.870 "strip_size_kb": 64, 00:17:31.870 "state": "online", 00:17:31.870 "raid_level": "raid5f", 00:17:31.870 "superblock": false, 00:17:31.870 "num_base_bdevs": 4, 00:17:31.870 "num_base_bdevs_discovered": 4, 00:17:31.870 "num_base_bdevs_operational": 4, 00:17:31.870 "base_bdevs_list": [ 00:17:31.870 { 00:17:31.870 "name": "spare", 00:17:31.870 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:31.870 "is_configured": true, 00:17:31.870 "data_offset": 0, 00:17:31.870 "data_size": 65536 00:17:31.870 }, 00:17:31.870 { 00:17:31.870 "name": "BaseBdev2", 00:17:31.870 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:31.870 "is_configured": true, 00:17:31.870 "data_offset": 0, 00:17:31.870 "data_size": 65536 00:17:31.870 }, 00:17:31.870 { 00:17:31.870 "name": "BaseBdev3", 00:17:31.870 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:31.870 "is_configured": true, 00:17:31.870 "data_offset": 0, 00:17:31.870 "data_size": 65536 00:17:31.870 }, 00:17:31.870 { 00:17:31.870 "name": "BaseBdev4", 00:17:31.870 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:31.870 "is_configured": true, 00:17:31.870 "data_offset": 0, 00:17:31.870 "data_size": 65536 00:17:31.870 } 00:17:31.870 ] 00:17:31.870 }' 00:17:31.870 18:14:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.871 "name": "raid_bdev1", 00:17:31.871 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:31.871 "strip_size_kb": 64, 00:17:31.871 "state": "online", 00:17:31.871 "raid_level": "raid5f", 00:17:31.871 "superblock": false, 00:17:31.871 "num_base_bdevs": 4, 00:17:31.871 "num_base_bdevs_discovered": 4, 
00:17:31.871 "num_base_bdevs_operational": 4, 00:17:31.871 "base_bdevs_list": [ 00:17:31.871 { 00:17:31.871 "name": "spare", 00:17:31.871 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:31.871 "is_configured": true, 00:17:31.871 "data_offset": 0, 00:17:31.871 "data_size": 65536 00:17:31.871 }, 00:17:31.871 { 00:17:31.871 "name": "BaseBdev2", 00:17:31.871 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:31.871 "is_configured": true, 00:17:31.871 "data_offset": 0, 00:17:31.871 "data_size": 65536 00:17:31.871 }, 00:17:31.871 { 00:17:31.871 "name": "BaseBdev3", 00:17:31.871 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:31.871 "is_configured": true, 00:17:31.871 "data_offset": 0, 00:17:31.871 "data_size": 65536 00:17:31.871 }, 00:17:31.871 { 00:17:31.871 "name": "BaseBdev4", 00:17:31.871 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:31.871 "is_configured": true, 00:17:31.871 "data_offset": 0, 00:17:31.871 "data_size": 65536 00:17:31.871 } 00:17:31.871 ] 00:17:31.871 }' 00:17:31.871 18:14:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.131 "name": "raid_bdev1", 00:17:32.131 "uuid": "dcc278fe-a08a-4a11-a758-59ffc348604b", 00:17:32.131 "strip_size_kb": 64, 00:17:32.131 "state": "online", 00:17:32.131 "raid_level": "raid5f", 00:17:32.131 "superblock": false, 00:17:32.131 "num_base_bdevs": 4, 00:17:32.131 "num_base_bdevs_discovered": 4, 00:17:32.131 "num_base_bdevs_operational": 4, 00:17:32.131 "base_bdevs_list": [ 00:17:32.131 { 00:17:32.131 "name": "spare", 00:17:32.131 "uuid": "ea24057e-3566-5965-9dcb-e1b70e8e21d6", 00:17:32.131 "is_configured": true, 00:17:32.131 "data_offset": 0, 00:17:32.131 "data_size": 65536 00:17:32.131 }, 00:17:32.131 { 00:17:32.131 "name": "BaseBdev2", 00:17:32.131 "uuid": "63e54c89-4fa5-505e-8b82-b62687cce0e2", 00:17:32.131 "is_configured": true, 00:17:32.131 "data_offset": 0, 00:17:32.131 
"data_size": 65536 00:17:32.131 }, 00:17:32.131 { 00:17:32.131 "name": "BaseBdev3", 00:17:32.131 "uuid": "2c564b63-79f0-53ff-85da-a31cc9070ad9", 00:17:32.131 "is_configured": true, 00:17:32.131 "data_offset": 0, 00:17:32.131 "data_size": 65536 00:17:32.131 }, 00:17:32.131 { 00:17:32.131 "name": "BaseBdev4", 00:17:32.131 "uuid": "999a5d56-f8eb-564a-9799-61ddda756046", 00:17:32.131 "is_configured": true, 00:17:32.131 "data_offset": 0, 00:17:32.131 "data_size": 65536 00:17:32.131 } 00:17:32.131 ] 00:17:32.131 }' 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.131 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.390 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.390 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.390 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.391 [2024-12-06 18:14:44.525329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.391 [2024-12-06 18:14:44.525367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.391 [2024-12-06 18:14:44.525475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.391 [2024-12-06 18:14:44.525587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.391 [2024-12-06 18:14:44.525600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:32.391 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.391 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.391 18:14:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.391 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:32.391 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.391 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.650 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:32.911 /dev/nbd0 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.911 1+0 records in 00:17:32.911 1+0 records out 00:17:32.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244031 s, 16.8 MB/s 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.911 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:32.912 18:14:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:32.912 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.912 18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.912 
18:14:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:33.171 /dev/nbd1 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.171 1+0 records in 00:17:33.171 1+0 records out 00:17:33.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365616 s, 11.2 MB/s 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.171 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.430 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:33.688 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:33.688 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:33.688 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:33.688 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.688 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.688 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:33.688 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85171 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85171 ']' 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85171 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85171 00:17:33.966 killing process with pid 85171 00:17:33.966 Received shutdown signal, test time 
was about 60.000000 seconds 00:17:33.966 00:17:33.966 Latency(us) 00:17:33.966 [2024-12-06T18:14:46.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.966 [2024-12-06T18:14:46.134Z] =================================================================================================================== 00:17:33.966 [2024-12-06T18:14:46.134Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85171' 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85171 00:17:33.966 [2024-12-06 18:14:45.895670] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.966 18:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85171 00:17:34.533 [2024-12-06 18:14:46.440894] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:35.912 00:17:35.912 real 0m20.542s 00:17:35.912 user 0m24.573s 00:17:35.912 sys 0m2.337s 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.912 ************************************ 00:17:35.912 END TEST raid5f_rebuild_test 00:17:35.912 ************************************ 00:17:35.912 18:14:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:35.912 18:14:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:35.912 18:14:47 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.912 18:14:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.912 ************************************ 00:17:35.912 START TEST raid5f_rebuild_test_sb 00:17:35.912 ************************************ 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.912 18:14:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85693 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85693 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85693 ']' 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.912 18:14:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.912 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:35.912 Zero copy mechanism will not be used. 00:17:35.912 [2024-12-06 18:14:47.840596] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:17:35.912 [2024-12-06 18:14:47.840736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85693 ] 00:17:35.912 [2024-12-06 18:14:48.021265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.172 [2024-12-06 18:14:48.152924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.431 [2024-12-06 18:14:48.375475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.431 [2024-12-06 18:14:48.375523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.691 BaseBdev1_malloc 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.691 [2024-12-06 18:14:48.772599] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:36.691 [2024-12-06 18:14:48.772672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.691 [2024-12-06 18:14:48.772695] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:36.691 [2024-12-06 18:14:48.772707] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.691 [2024-12-06 18:14:48.775022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.691 [2024-12-06 18:14:48.775134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:36.691 BaseBdev1 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.691 BaseBdev2_malloc 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.691 [2024-12-06 18:14:48.829622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:36.691 [2024-12-06 18:14:48.829692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:36.691 [2024-12-06 18:14:48.829715] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:36.691 [2024-12-06 18:14:48.829727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.691 [2024-12-06 18:14:48.832287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.691 [2024-12-06 18:14:48.832347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:36.691 BaseBdev2 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.691 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.952 BaseBdev3_malloc 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.952 [2024-12-06 18:14:48.898351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:36.952 [2024-12-06 18:14:48.898502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.952 [2024-12-06 18:14:48.898535] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:36.952 [2024-12-06 
18:14:48.898548] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.952 [2024-12-06 18:14:48.900979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.952 [2024-12-06 18:14:48.901027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:36.952 BaseBdev3 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.952 BaseBdev4_malloc 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.952 [2024-12-06 18:14:48.955515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:36.952 [2024-12-06 18:14:48.955657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.952 [2024-12-06 18:14:48.955705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:36.952 [2024-12-06 18:14:48.955719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.952 [2024-12-06 18:14:48.958148] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:36.952 [2024-12-06 18:14:48.958193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:36.952 BaseBdev4 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.952 18:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.952 spare_malloc 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.952 spare_delay 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.952 [2024-12-06 18:14:49.027348] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:36.952 [2024-12-06 18:14:49.027420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.952 [2024-12-06 18:14:49.027445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:36.952 [2024-12-06 18:14:49.027457] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.952 [2024-12-06 18:14:49.029933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.952 [2024-12-06 18:14:49.029979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:36.952 spare 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.952 [2024-12-06 18:14:49.039421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.952 [2024-12-06 18:14:49.042028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.952 [2024-12-06 18:14:49.042165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.952 [2024-12-06 18:14:49.042253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:36.952 [2024-12-06 18:14:49.042498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:36.952 [2024-12-06 18:14:49.042577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:36.952 [2024-12-06 18:14:49.042939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:36.952 [2024-12-06 18:14:49.052243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:36.952 [2024-12-06 18:14:49.052288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:36.952 [2024-12-06 18:14:49.052572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.952 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.952 18:14:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.952 "name": "raid_bdev1", 00:17:36.952 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:36.953 "strip_size_kb": 64, 00:17:36.953 "state": "online", 00:17:36.953 "raid_level": "raid5f", 00:17:36.953 "superblock": true, 00:17:36.953 "num_base_bdevs": 4, 00:17:36.953 "num_base_bdevs_discovered": 4, 00:17:36.953 "num_base_bdevs_operational": 4, 00:17:36.953 "base_bdevs_list": [ 00:17:36.953 { 00:17:36.953 "name": "BaseBdev1", 00:17:36.953 "uuid": "e95ccd15-371e-5292-8119-73c00137e7f5", 00:17:36.953 "is_configured": true, 00:17:36.953 "data_offset": 2048, 00:17:36.953 "data_size": 63488 00:17:36.953 }, 00:17:36.953 { 00:17:36.953 "name": "BaseBdev2", 00:17:36.953 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:36.953 "is_configured": true, 00:17:36.953 "data_offset": 2048, 00:17:36.953 "data_size": 63488 00:17:36.953 }, 00:17:36.953 { 00:17:36.953 "name": "BaseBdev3", 00:17:36.953 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:36.953 "is_configured": true, 00:17:36.953 "data_offset": 2048, 00:17:36.953 "data_size": 63488 00:17:36.953 }, 00:17:36.953 { 00:17:36.953 "name": "BaseBdev4", 00:17:36.953 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:36.953 "is_configured": true, 00:17:36.953 "data_offset": 2048, 00:17:36.953 "data_size": 63488 00:17:36.953 } 00:17:36.953 ] 00:17:36.953 }' 00:17:36.953 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.953 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.520 18:14:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.520 [2024-12-06 18:14:49.482064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:37.520 18:14:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.520 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:37.779 [2024-12-06 18:14:49.761428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:37.780 /dev/nbd0 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.780 1+0 records in 00:17:37.780 
1+0 records out 00:17:37.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050476 s, 8.1 MB/s 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:37.780 18:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:38.349 496+0 records in 00:17:38.349 496+0 records out 00:17:38.349 97517568 bytes (98 MB, 93 MiB) copied, 0.594288 s, 164 MB/s 00:17:38.349 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:38.349 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.349 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:38.349 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.349 18:14:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:38.349 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.349 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:38.609 [2024-12-06 18:14:50.677666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.609 [2024-12-06 18:14:50.697707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:38.609 18:14:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.609 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.609 "name": "raid_bdev1", 00:17:38.609 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:38.609 "strip_size_kb": 64, 00:17:38.609 "state": "online", 00:17:38.609 "raid_level": "raid5f", 00:17:38.609 "superblock": true, 00:17:38.609 "num_base_bdevs": 4, 00:17:38.609 "num_base_bdevs_discovered": 3, 00:17:38.609 "num_base_bdevs_operational": 3, 00:17:38.610 
"base_bdevs_list": [ 00:17:38.610 { 00:17:38.610 "name": null, 00:17:38.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.610 "is_configured": false, 00:17:38.610 "data_offset": 0, 00:17:38.610 "data_size": 63488 00:17:38.610 }, 00:17:38.610 { 00:17:38.610 "name": "BaseBdev2", 00:17:38.610 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:38.610 "is_configured": true, 00:17:38.610 "data_offset": 2048, 00:17:38.610 "data_size": 63488 00:17:38.610 }, 00:17:38.610 { 00:17:38.610 "name": "BaseBdev3", 00:17:38.610 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:38.610 "is_configured": true, 00:17:38.610 "data_offset": 2048, 00:17:38.610 "data_size": 63488 00:17:38.610 }, 00:17:38.610 { 00:17:38.610 "name": "BaseBdev4", 00:17:38.610 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:38.610 "is_configured": true, 00:17:38.610 "data_offset": 2048, 00:17:38.610 "data_size": 63488 00:17:38.610 } 00:17:38.610 ] 00:17:38.610 }' 00:17:38.610 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.610 18:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.177 18:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:39.177 18:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.177 18:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.177 [2024-12-06 18:14:51.148974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.177 [2024-12-06 18:14:51.167555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:39.177 18:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.177 18:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:39.178 [2024-12-06 18:14:51.178884] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.116 "name": "raid_bdev1", 00:17:40.116 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:40.116 "strip_size_kb": 64, 00:17:40.116 "state": "online", 00:17:40.116 "raid_level": "raid5f", 00:17:40.116 "superblock": true, 00:17:40.116 "num_base_bdevs": 4, 00:17:40.116 "num_base_bdevs_discovered": 4, 00:17:40.116 "num_base_bdevs_operational": 4, 00:17:40.116 "process": { 00:17:40.116 "type": "rebuild", 00:17:40.116 "target": "spare", 00:17:40.116 "progress": { 00:17:40.116 "blocks": 17280, 00:17:40.116 "percent": 9 00:17:40.116 } 00:17:40.116 }, 00:17:40.116 "base_bdevs_list": [ 00:17:40.116 { 00:17:40.116 "name": "spare", 00:17:40.116 "uuid": 
"9bfef878-88fc-5552-a061-f917ed327ede", 00:17:40.116 "is_configured": true, 00:17:40.116 "data_offset": 2048, 00:17:40.116 "data_size": 63488 00:17:40.116 }, 00:17:40.116 { 00:17:40.116 "name": "BaseBdev2", 00:17:40.116 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:40.116 "is_configured": true, 00:17:40.116 "data_offset": 2048, 00:17:40.116 "data_size": 63488 00:17:40.116 }, 00:17:40.116 { 00:17:40.116 "name": "BaseBdev3", 00:17:40.116 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:40.116 "is_configured": true, 00:17:40.116 "data_offset": 2048, 00:17:40.116 "data_size": 63488 00:17:40.116 }, 00:17:40.116 { 00:17:40.116 "name": "BaseBdev4", 00:17:40.116 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:40.116 "is_configured": true, 00:17:40.116 "data_offset": 2048, 00:17:40.116 "data_size": 63488 00:17:40.116 } 00:17:40.116 ] 00:17:40.116 }' 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.116 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.377 [2024-12-06 18:14:52.335055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.377 [2024-12-06 18:14:52.389483] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:40.377 [2024-12-06 18:14:52.389700] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.377 [2024-12-06 18:14:52.389776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.377 [2024-12-06 18:14:52.389807] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.377 "name": "raid_bdev1", 00:17:40.377 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:40.377 "strip_size_kb": 64, 00:17:40.377 "state": "online", 00:17:40.377 "raid_level": "raid5f", 00:17:40.377 "superblock": true, 00:17:40.377 "num_base_bdevs": 4, 00:17:40.377 "num_base_bdevs_discovered": 3, 00:17:40.377 "num_base_bdevs_operational": 3, 00:17:40.377 "base_bdevs_list": [ 00:17:40.377 { 00:17:40.377 "name": null, 00:17:40.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.377 "is_configured": false, 00:17:40.377 "data_offset": 0, 00:17:40.377 "data_size": 63488 00:17:40.377 }, 00:17:40.377 { 00:17:40.377 "name": "BaseBdev2", 00:17:40.377 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:40.377 "is_configured": true, 00:17:40.377 "data_offset": 2048, 00:17:40.377 "data_size": 63488 00:17:40.377 }, 00:17:40.377 { 00:17:40.377 "name": "BaseBdev3", 00:17:40.377 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:40.377 "is_configured": true, 00:17:40.377 "data_offset": 2048, 00:17:40.377 "data_size": 63488 00:17:40.377 }, 00:17:40.377 { 00:17:40.377 "name": "BaseBdev4", 00:17:40.377 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:40.377 "is_configured": true, 00:17:40.377 "data_offset": 2048, 00:17:40.377 "data_size": 63488 00:17:40.377 } 00:17:40.377 ] 00:17:40.377 }' 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.377 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.952 
18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.952 "name": "raid_bdev1", 00:17:40.952 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:40.952 "strip_size_kb": 64, 00:17:40.952 "state": "online", 00:17:40.952 "raid_level": "raid5f", 00:17:40.952 "superblock": true, 00:17:40.952 "num_base_bdevs": 4, 00:17:40.952 "num_base_bdevs_discovered": 3, 00:17:40.952 "num_base_bdevs_operational": 3, 00:17:40.952 "base_bdevs_list": [ 00:17:40.952 { 00:17:40.952 "name": null, 00:17:40.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.952 "is_configured": false, 00:17:40.952 "data_offset": 0, 00:17:40.952 "data_size": 63488 00:17:40.952 }, 00:17:40.952 { 00:17:40.952 "name": "BaseBdev2", 00:17:40.952 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:40.952 "is_configured": true, 00:17:40.952 "data_offset": 2048, 00:17:40.952 "data_size": 63488 00:17:40.952 }, 00:17:40.952 { 00:17:40.952 "name": "BaseBdev3", 00:17:40.952 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:40.952 "is_configured": true, 00:17:40.952 "data_offset": 2048, 00:17:40.952 
"data_size": 63488 00:17:40.952 }, 00:17:40.952 { 00:17:40.952 "name": "BaseBdev4", 00:17:40.952 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:40.952 "is_configured": true, 00:17:40.952 "data_offset": 2048, 00:17:40.952 "data_size": 63488 00:17:40.952 } 00:17:40.952 ] 00:17:40.952 }' 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.952 18:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.952 18:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.952 18:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.952 18:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.952 18:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.952 [2024-12-06 18:14:53.023243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.952 [2024-12-06 18:14:53.040791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:40.952 18:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.952 18:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:40.952 [2024-12-06 18:14:53.051832] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.891 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.891 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.891 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.891 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.891 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.891 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.891 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.891 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.891 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.192 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.192 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.192 "name": "raid_bdev1", 00:17:42.192 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:42.192 "strip_size_kb": 64, 00:17:42.192 "state": "online", 00:17:42.192 "raid_level": "raid5f", 00:17:42.192 "superblock": true, 00:17:42.192 "num_base_bdevs": 4, 00:17:42.192 "num_base_bdevs_discovered": 4, 00:17:42.192 "num_base_bdevs_operational": 4, 00:17:42.192 "process": { 00:17:42.192 "type": "rebuild", 00:17:42.192 "target": "spare", 00:17:42.192 "progress": { 00:17:42.192 "blocks": 17280, 00:17:42.192 "percent": 9 00:17:42.192 } 00:17:42.192 }, 00:17:42.192 "base_bdevs_list": [ 00:17:42.192 { 00:17:42.192 "name": "spare", 00:17:42.192 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:42.192 "is_configured": true, 00:17:42.192 "data_offset": 2048, 00:17:42.192 "data_size": 63488 00:17:42.192 }, 00:17:42.192 { 00:17:42.192 "name": "BaseBdev2", 00:17:42.192 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:42.192 "is_configured": true, 00:17:42.192 "data_offset": 2048, 00:17:42.192 "data_size": 63488 00:17:42.193 }, 00:17:42.193 { 
00:17:42.193 "name": "BaseBdev3", 00:17:42.193 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:42.193 "is_configured": true, 00:17:42.193 "data_offset": 2048, 00:17:42.193 "data_size": 63488 00:17:42.193 }, 00:17:42.193 { 00:17:42.193 "name": "BaseBdev4", 00:17:42.193 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:42.193 "is_configured": true, 00:17:42.193 "data_offset": 2048, 00:17:42.193 "data_size": 63488 00:17:42.193 } 00:17:42.193 ] 00:17:42.193 }' 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:42.193 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=668 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.193 "name": "raid_bdev1", 00:17:42.193 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:42.193 "strip_size_kb": 64, 00:17:42.193 "state": "online", 00:17:42.193 "raid_level": "raid5f", 00:17:42.193 "superblock": true, 00:17:42.193 "num_base_bdevs": 4, 00:17:42.193 "num_base_bdevs_discovered": 4, 00:17:42.193 "num_base_bdevs_operational": 4, 00:17:42.193 "process": { 00:17:42.193 "type": "rebuild", 00:17:42.193 "target": "spare", 00:17:42.193 "progress": { 00:17:42.193 "blocks": 21120, 00:17:42.193 "percent": 11 00:17:42.193 } 00:17:42.193 }, 00:17:42.193 "base_bdevs_list": [ 00:17:42.193 { 00:17:42.193 "name": "spare", 00:17:42.193 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:42.193 "is_configured": true, 00:17:42.193 "data_offset": 2048, 00:17:42.193 "data_size": 63488 00:17:42.193 }, 00:17:42.193 { 00:17:42.193 "name": "BaseBdev2", 00:17:42.193 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:42.193 "is_configured": true, 00:17:42.193 "data_offset": 2048, 00:17:42.193 "data_size": 63488 00:17:42.193 }, 00:17:42.193 { 
00:17:42.193 "name": "BaseBdev3", 00:17:42.193 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:42.193 "is_configured": true, 00:17:42.193 "data_offset": 2048, 00:17:42.193 "data_size": 63488 00:17:42.193 }, 00:17:42.193 { 00:17:42.193 "name": "BaseBdev4", 00:17:42.193 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:42.193 "is_configured": true, 00:17:42.193 "data_offset": 2048, 00:17:42.193 "data_size": 63488 00:17:42.193 } 00:17:42.193 ] 00:17:42.193 }' 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.193 18:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.579 18:14:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.579 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.579 "name": "raid_bdev1", 00:17:43.579 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:43.579 "strip_size_kb": 64, 00:17:43.579 "state": "online", 00:17:43.579 "raid_level": "raid5f", 00:17:43.579 "superblock": true, 00:17:43.579 "num_base_bdevs": 4, 00:17:43.579 "num_base_bdevs_discovered": 4, 00:17:43.579 "num_base_bdevs_operational": 4, 00:17:43.580 "process": { 00:17:43.580 "type": "rebuild", 00:17:43.580 "target": "spare", 00:17:43.580 "progress": { 00:17:43.580 "blocks": 42240, 00:17:43.580 "percent": 22 00:17:43.580 } 00:17:43.580 }, 00:17:43.580 "base_bdevs_list": [ 00:17:43.580 { 00:17:43.580 "name": "spare", 00:17:43.580 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:43.580 "is_configured": true, 00:17:43.580 "data_offset": 2048, 00:17:43.580 "data_size": 63488 00:17:43.580 }, 00:17:43.580 { 00:17:43.580 "name": "BaseBdev2", 00:17:43.580 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:43.580 "is_configured": true, 00:17:43.580 "data_offset": 2048, 00:17:43.580 "data_size": 63488 00:17:43.580 }, 00:17:43.580 { 00:17:43.580 "name": "BaseBdev3", 00:17:43.580 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:43.580 "is_configured": true, 00:17:43.580 "data_offset": 2048, 00:17:43.580 "data_size": 63488 00:17:43.580 }, 00:17:43.580 { 00:17:43.580 "name": "BaseBdev4", 00:17:43.580 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:43.580 "is_configured": true, 00:17:43.580 "data_offset": 2048, 00:17:43.580 "data_size": 63488 00:17:43.580 } 00:17:43.580 ] 00:17:43.580 }' 00:17:43.580 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.580 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.580 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.580 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.580 18:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.612 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.612 "name": "raid_bdev1", 00:17:44.612 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:44.612 "strip_size_kb": 64, 00:17:44.612 "state": 
"online", 00:17:44.612 "raid_level": "raid5f", 00:17:44.612 "superblock": true, 00:17:44.612 "num_base_bdevs": 4, 00:17:44.612 "num_base_bdevs_discovered": 4, 00:17:44.612 "num_base_bdevs_operational": 4, 00:17:44.612 "process": { 00:17:44.612 "type": "rebuild", 00:17:44.612 "target": "spare", 00:17:44.612 "progress": { 00:17:44.612 "blocks": 63360, 00:17:44.612 "percent": 33 00:17:44.612 } 00:17:44.612 }, 00:17:44.612 "base_bdevs_list": [ 00:17:44.612 { 00:17:44.612 "name": "spare", 00:17:44.612 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:44.612 "is_configured": true, 00:17:44.612 "data_offset": 2048, 00:17:44.612 "data_size": 63488 00:17:44.612 }, 00:17:44.612 { 00:17:44.612 "name": "BaseBdev2", 00:17:44.612 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:44.612 "is_configured": true, 00:17:44.612 "data_offset": 2048, 00:17:44.612 "data_size": 63488 00:17:44.612 }, 00:17:44.612 { 00:17:44.612 "name": "BaseBdev3", 00:17:44.612 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:44.613 "is_configured": true, 00:17:44.613 "data_offset": 2048, 00:17:44.613 "data_size": 63488 00:17:44.613 }, 00:17:44.613 { 00:17:44.613 "name": "BaseBdev4", 00:17:44.613 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:44.613 "is_configured": true, 00:17:44.613 "data_offset": 2048, 00:17:44.613 "data_size": 63488 00:17:44.613 } 00:17:44.613 ] 00:17:44.613 }' 00:17:44.613 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.613 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.613 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.613 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.613 18:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.546 "name": "raid_bdev1", 00:17:45.546 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:45.546 "strip_size_kb": 64, 00:17:45.546 "state": "online", 00:17:45.546 "raid_level": "raid5f", 00:17:45.546 "superblock": true, 00:17:45.546 "num_base_bdevs": 4, 00:17:45.546 "num_base_bdevs_discovered": 4, 00:17:45.546 "num_base_bdevs_operational": 4, 00:17:45.546 "process": { 00:17:45.546 "type": "rebuild", 00:17:45.546 "target": "spare", 00:17:45.546 "progress": { 00:17:45.546 "blocks": 86400, 00:17:45.546 "percent": 45 00:17:45.546 } 00:17:45.546 }, 00:17:45.546 "base_bdevs_list": [ 00:17:45.546 { 00:17:45.546 "name": "spare", 00:17:45.546 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 
00:17:45.546 "is_configured": true, 00:17:45.546 "data_offset": 2048, 00:17:45.546 "data_size": 63488 00:17:45.546 }, 00:17:45.546 { 00:17:45.546 "name": "BaseBdev2", 00:17:45.546 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:45.546 "is_configured": true, 00:17:45.546 "data_offset": 2048, 00:17:45.546 "data_size": 63488 00:17:45.546 }, 00:17:45.546 { 00:17:45.546 "name": "BaseBdev3", 00:17:45.546 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:45.546 "is_configured": true, 00:17:45.546 "data_offset": 2048, 00:17:45.546 "data_size": 63488 00:17:45.546 }, 00:17:45.546 { 00:17:45.546 "name": "BaseBdev4", 00:17:45.546 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:45.546 "is_configured": true, 00:17:45.546 "data_offset": 2048, 00:17:45.546 "data_size": 63488 00:17:45.546 } 00:17:45.546 ] 00:17:45.546 }' 00:17:45.546 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.804 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.804 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.804 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.804 18:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.738 18:14:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.738 "name": "raid_bdev1", 00:17:46.738 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:46.738 "strip_size_kb": 64, 00:17:46.738 "state": "online", 00:17:46.738 "raid_level": "raid5f", 00:17:46.738 "superblock": true, 00:17:46.738 "num_base_bdevs": 4, 00:17:46.738 "num_base_bdevs_discovered": 4, 00:17:46.738 "num_base_bdevs_operational": 4, 00:17:46.738 "process": { 00:17:46.738 "type": "rebuild", 00:17:46.738 "target": "spare", 00:17:46.738 "progress": { 00:17:46.738 "blocks": 109440, 00:17:46.738 "percent": 57 00:17:46.738 } 00:17:46.738 }, 00:17:46.738 "base_bdevs_list": [ 00:17:46.738 { 00:17:46.738 "name": "spare", 00:17:46.738 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:46.738 "is_configured": true, 00:17:46.738 "data_offset": 2048, 00:17:46.738 "data_size": 63488 00:17:46.738 }, 00:17:46.738 { 00:17:46.738 "name": "BaseBdev2", 00:17:46.738 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:46.738 "is_configured": true, 00:17:46.738 "data_offset": 2048, 00:17:46.738 "data_size": 63488 00:17:46.738 }, 00:17:46.738 { 00:17:46.738 "name": "BaseBdev3", 00:17:46.738 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:46.738 "is_configured": true, 00:17:46.738 "data_offset": 2048, 00:17:46.738 
"data_size": 63488 00:17:46.738 }, 00:17:46.738 { 00:17:46.738 "name": "BaseBdev4", 00:17:46.738 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:46.738 "is_configured": true, 00:17:46.738 "data_offset": 2048, 00:17:46.738 "data_size": 63488 00:17:46.738 } 00:17:46.738 ] 00:17:46.738 }' 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.738 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.996 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.996 18:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.931 
18:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.931 18:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.931 "name": "raid_bdev1", 00:17:47.931 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:47.931 "strip_size_kb": 64, 00:17:47.931 "state": "online", 00:17:47.931 "raid_level": "raid5f", 00:17:47.931 "superblock": true, 00:17:47.931 "num_base_bdevs": 4, 00:17:47.931 "num_base_bdevs_discovered": 4, 00:17:47.931 "num_base_bdevs_operational": 4, 00:17:47.931 "process": { 00:17:47.931 "type": "rebuild", 00:17:47.931 "target": "spare", 00:17:47.931 "progress": { 00:17:47.931 "blocks": 130560, 00:17:47.931 "percent": 68 00:17:47.931 } 00:17:47.931 }, 00:17:47.931 "base_bdevs_list": [ 00:17:47.931 { 00:17:47.931 "name": "spare", 00:17:47.931 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:47.931 "is_configured": true, 00:17:47.931 "data_offset": 2048, 00:17:47.931 "data_size": 63488 00:17:47.931 }, 00:17:47.931 { 00:17:47.931 "name": "BaseBdev2", 00:17:47.931 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:47.931 "is_configured": true, 00:17:47.931 "data_offset": 2048, 00:17:47.931 "data_size": 63488 00:17:47.931 }, 00:17:47.931 { 00:17:47.931 "name": "BaseBdev3", 00:17:47.931 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:47.931 "is_configured": true, 00:17:47.931 "data_offset": 2048, 00:17:47.931 "data_size": 63488 00:17:47.931 }, 00:17:47.931 { 00:17:47.931 "name": "BaseBdev4", 00:17:47.931 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:47.931 "is_configured": true, 00:17:47.931 "data_offset": 2048, 00:17:47.931 "data_size": 63488 00:17:47.931 } 00:17:47.931 ] 00:17:47.931 }' 00:17:47.931 18:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.931 18:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.931 18:15:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.190 18:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.190 18:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:49.128 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.128 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.128 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.128 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.128 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.128 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.128 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.128 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.129 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.129 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.129 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.129 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.129 "name": "raid_bdev1", 00:17:49.129 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:49.129 "strip_size_kb": 64, 00:17:49.129 "state": "online", 00:17:49.129 "raid_level": "raid5f", 00:17:49.129 "superblock": true, 00:17:49.129 "num_base_bdevs": 4, 00:17:49.129 "num_base_bdevs_discovered": 4, 00:17:49.129 "num_base_bdevs_operational": 
4, 00:17:49.129 "process": { 00:17:49.129 "type": "rebuild", 00:17:49.129 "target": "spare", 00:17:49.129 "progress": { 00:17:49.129 "blocks": 153600, 00:17:49.129 "percent": 80 00:17:49.129 } 00:17:49.129 }, 00:17:49.129 "base_bdevs_list": [ 00:17:49.129 { 00:17:49.129 "name": "spare", 00:17:49.129 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:49.129 "is_configured": true, 00:17:49.129 "data_offset": 2048, 00:17:49.129 "data_size": 63488 00:17:49.129 }, 00:17:49.129 { 00:17:49.129 "name": "BaseBdev2", 00:17:49.129 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:49.129 "is_configured": true, 00:17:49.129 "data_offset": 2048, 00:17:49.129 "data_size": 63488 00:17:49.129 }, 00:17:49.129 { 00:17:49.129 "name": "BaseBdev3", 00:17:49.129 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:49.129 "is_configured": true, 00:17:49.129 "data_offset": 2048, 00:17:49.129 "data_size": 63488 00:17:49.129 }, 00:17:49.129 { 00:17:49.129 "name": "BaseBdev4", 00:17:49.129 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:49.129 "is_configured": true, 00:17:49.129 "data_offset": 2048, 00:17:49.129 "data_size": 63488 00:17:49.129 } 00:17:49.129 ] 00:17:49.129 }' 00:17:49.129 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.129 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.129 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.129 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.129 18:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.507 
18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.507 "name": "raid_bdev1", 00:17:50.507 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:50.507 "strip_size_kb": 64, 00:17:50.507 "state": "online", 00:17:50.507 "raid_level": "raid5f", 00:17:50.507 "superblock": true, 00:17:50.507 "num_base_bdevs": 4, 00:17:50.507 "num_base_bdevs_discovered": 4, 00:17:50.507 "num_base_bdevs_operational": 4, 00:17:50.507 "process": { 00:17:50.507 "type": "rebuild", 00:17:50.507 "target": "spare", 00:17:50.507 "progress": { 00:17:50.507 "blocks": 174720, 00:17:50.507 "percent": 91 00:17:50.507 } 00:17:50.507 }, 00:17:50.507 "base_bdevs_list": [ 00:17:50.507 { 00:17:50.507 "name": "spare", 00:17:50.507 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:50.507 "is_configured": true, 00:17:50.507 "data_offset": 2048, 00:17:50.507 "data_size": 63488 00:17:50.507 }, 00:17:50.507 { 00:17:50.507 "name": "BaseBdev2", 00:17:50.507 "uuid": 
"67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:50.507 "is_configured": true, 00:17:50.507 "data_offset": 2048, 00:17:50.507 "data_size": 63488 00:17:50.507 }, 00:17:50.507 { 00:17:50.507 "name": "BaseBdev3", 00:17:50.507 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:50.507 "is_configured": true, 00:17:50.507 "data_offset": 2048, 00:17:50.507 "data_size": 63488 00:17:50.507 }, 00:17:50.507 { 00:17:50.507 "name": "BaseBdev4", 00:17:50.507 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:50.507 "is_configured": true, 00:17:50.507 "data_offset": 2048, 00:17:50.507 "data_size": 63488 00:17:50.507 } 00:17:50.507 ] 00:17:50.507 }' 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.507 18:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:51.077 [2024-12-06 18:15:03.132555] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:51.077 [2024-12-06 18:15:03.132734] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:51.077 [2024-12-06 18:15:03.132958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.336 "name": "raid_bdev1", 00:17:51.336 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:51.336 "strip_size_kb": 64, 00:17:51.336 "state": "online", 00:17:51.336 "raid_level": "raid5f", 00:17:51.336 "superblock": true, 00:17:51.336 "num_base_bdevs": 4, 00:17:51.336 "num_base_bdevs_discovered": 4, 00:17:51.336 "num_base_bdevs_operational": 4, 00:17:51.336 "base_bdevs_list": [ 00:17:51.336 { 00:17:51.336 "name": "spare", 00:17:51.336 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:51.336 "is_configured": true, 00:17:51.336 "data_offset": 2048, 00:17:51.336 "data_size": 63488 00:17:51.336 }, 00:17:51.336 { 00:17:51.336 "name": "BaseBdev2", 00:17:51.336 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:51.336 "is_configured": true, 00:17:51.336 "data_offset": 2048, 00:17:51.336 "data_size": 63488 00:17:51.336 }, 00:17:51.336 { 00:17:51.336 "name": "BaseBdev3", 00:17:51.336 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:51.336 "is_configured": true, 00:17:51.336 "data_offset": 2048, 00:17:51.336 "data_size": 63488 00:17:51.336 }, 
00:17:51.336 { 00:17:51.336 "name": "BaseBdev4", 00:17:51.336 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:51.336 "is_configured": true, 00:17:51.336 "data_offset": 2048, 00:17:51.336 "data_size": 63488 00:17:51.336 } 00:17:51.336 ] 00:17:51.336 }' 00:17:51.336 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.596 "name": "raid_bdev1", 00:17:51.596 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:51.596 "strip_size_kb": 64, 00:17:51.596 "state": "online", 00:17:51.596 "raid_level": "raid5f", 00:17:51.596 "superblock": true, 00:17:51.596 "num_base_bdevs": 4, 00:17:51.596 "num_base_bdevs_discovered": 4, 00:17:51.596 "num_base_bdevs_operational": 4, 00:17:51.596 "base_bdevs_list": [ 00:17:51.596 { 00:17:51.596 "name": "spare", 00:17:51.596 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:51.596 "is_configured": true, 00:17:51.596 "data_offset": 2048, 00:17:51.596 "data_size": 63488 00:17:51.596 }, 00:17:51.596 { 00:17:51.596 "name": "BaseBdev2", 00:17:51.596 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:51.596 "is_configured": true, 00:17:51.596 "data_offset": 2048, 00:17:51.596 "data_size": 63488 00:17:51.596 }, 00:17:51.596 { 00:17:51.596 "name": "BaseBdev3", 00:17:51.596 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:51.596 "is_configured": true, 00:17:51.596 "data_offset": 2048, 00:17:51.596 "data_size": 63488 00:17:51.596 }, 00:17:51.596 { 00:17:51.596 "name": "BaseBdev4", 00:17:51.596 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:51.596 "is_configured": true, 00:17:51.596 "data_offset": 2048, 00:17:51.596 "data_size": 63488 00:17:51.596 } 00:17:51.596 ] 00:17:51.596 }' 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.596 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:51.597 18:15:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.597 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.856 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.856 "name": "raid_bdev1", 00:17:51.856 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:51.856 "strip_size_kb": 64, 00:17:51.856 "state": "online", 00:17:51.856 "raid_level": "raid5f", 00:17:51.856 "superblock": true, 00:17:51.856 "num_base_bdevs": 4, 00:17:51.856 "num_base_bdevs_discovered": 4, 00:17:51.856 "num_base_bdevs_operational": 4, 00:17:51.856 
"base_bdevs_list": [ 00:17:51.856 { 00:17:51.856 "name": "spare", 00:17:51.856 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:51.856 "is_configured": true, 00:17:51.856 "data_offset": 2048, 00:17:51.856 "data_size": 63488 00:17:51.856 }, 00:17:51.856 { 00:17:51.856 "name": "BaseBdev2", 00:17:51.856 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:51.856 "is_configured": true, 00:17:51.856 "data_offset": 2048, 00:17:51.856 "data_size": 63488 00:17:51.856 }, 00:17:51.856 { 00:17:51.856 "name": "BaseBdev3", 00:17:51.856 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:51.856 "is_configured": true, 00:17:51.856 "data_offset": 2048, 00:17:51.856 "data_size": 63488 00:17:51.856 }, 00:17:51.856 { 00:17:51.856 "name": "BaseBdev4", 00:17:51.856 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:51.856 "is_configured": true, 00:17:51.856 "data_offset": 2048, 00:17:51.856 "data_size": 63488 00:17:51.856 } 00:17:51.856 ] 00:17:51.856 }' 00:17:51.856 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.856 18:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.116 [2024-12-06 18:15:04.221567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.116 [2024-12-06 18:15:04.221607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.116 [2024-12-06 18:15:04.221711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.116 [2024-12-06 18:15:04.221832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:17:52.116 [2024-12-06 18:15:04.221858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:52.116 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:52.376 /dev/nbd0 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:52.376 1+0 records in 00:17:52.376 1+0 records out 00:17:52.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514263 s, 8.0 MB/s 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:52.376 18:15:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:52.376 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:52.377 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:52.377 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:52.637 /dev/nbd1 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:17:52.637 1+0 records in 00:17:52.637 1+0 records out 00:17:52.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287308 s, 14.3 MB/s 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:52.637 18:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:52.900 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:52.900 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:52.900 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:52.900 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:52.900 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:52.900 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:52.900 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:53.163 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.422 [2024-12-06 18:15:05.554900] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:53.422 [2024-12-06 18:15:05.555023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.422 [2024-12-06 18:15:05.555080] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:53.422 [2024-12-06 18:15:05.555135] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.422 [2024-12-06 18:15:05.557895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.422 [2024-12-06 18:15:05.557981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:53.422 [2024-12-06 18:15:05.558162] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:53.422 [2024-12-06 18:15:05.558254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.422 [2024-12-06 18:15:05.558507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.422 [2024-12-06 18:15:05.558669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:53.422 [2024-12-06 18:15:05.558809] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:53.422 spare 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.422 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.682 [2024-12-06 18:15:05.658842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:53.682 [2024-12-06 18:15:05.658994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:53.682 [2024-12-06 18:15:05.659447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:53.682 [2024-12-06 18:15:05.669066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:53.682 [2024-12-06 18:15:05.669155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:53.682 [2024-12-06 18:15:05.669473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.682 "name": "raid_bdev1", 00:17:53.682 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:53.682 "strip_size_kb": 64, 00:17:53.682 "state": "online", 00:17:53.682 "raid_level": "raid5f", 00:17:53.682 "superblock": true, 00:17:53.682 "num_base_bdevs": 4, 00:17:53.682 "num_base_bdevs_discovered": 4, 00:17:53.682 "num_base_bdevs_operational": 4, 00:17:53.682 "base_bdevs_list": [ 00:17:53.682 { 00:17:53.682 "name": "spare", 00:17:53.682 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:53.682 "is_configured": true, 00:17:53.682 "data_offset": 2048, 00:17:53.682 "data_size": 63488 00:17:53.682 }, 00:17:53.682 { 00:17:53.682 "name": "BaseBdev2", 00:17:53.682 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:53.682 "is_configured": true, 00:17:53.682 "data_offset": 
2048, 00:17:53.682 "data_size": 63488 00:17:53.682 }, 00:17:53.682 { 00:17:53.682 "name": "BaseBdev3", 00:17:53.682 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:53.682 "is_configured": true, 00:17:53.682 "data_offset": 2048, 00:17:53.682 "data_size": 63488 00:17:53.682 }, 00:17:53.682 { 00:17:53.682 "name": "BaseBdev4", 00:17:53.682 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:53.682 "is_configured": true, 00:17:53.682 "data_offset": 2048, 00:17:53.682 "data_size": 63488 00:17:53.682 } 00:17:53.682 ] 00:17:53.682 }' 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.682 18:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.251 "name": 
"raid_bdev1", 00:17:54.251 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:54.251 "strip_size_kb": 64, 00:17:54.251 "state": "online", 00:17:54.251 "raid_level": "raid5f", 00:17:54.251 "superblock": true, 00:17:54.251 "num_base_bdevs": 4, 00:17:54.251 "num_base_bdevs_discovered": 4, 00:17:54.251 "num_base_bdevs_operational": 4, 00:17:54.251 "base_bdevs_list": [ 00:17:54.251 { 00:17:54.251 "name": "spare", 00:17:54.251 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:54.251 "is_configured": true, 00:17:54.251 "data_offset": 2048, 00:17:54.251 "data_size": 63488 00:17:54.251 }, 00:17:54.251 { 00:17:54.251 "name": "BaseBdev2", 00:17:54.251 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:54.251 "is_configured": true, 00:17:54.251 "data_offset": 2048, 00:17:54.251 "data_size": 63488 00:17:54.251 }, 00:17:54.251 { 00:17:54.251 "name": "BaseBdev3", 00:17:54.251 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:54.251 "is_configured": true, 00:17:54.251 "data_offset": 2048, 00:17:54.251 "data_size": 63488 00:17:54.251 }, 00:17:54.251 { 00:17:54.251 "name": "BaseBdev4", 00:17:54.251 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:54.251 "is_configured": true, 00:17:54.251 "data_offset": 2048, 00:17:54.251 "data_size": 63488 00:17:54.251 } 00:17:54.251 ] 00:17:54.251 }' 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.251 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.252 [2024-12-06 18:15:06.310487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.252 "name": "raid_bdev1", 00:17:54.252 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:54.252 "strip_size_kb": 64, 00:17:54.252 "state": "online", 00:17:54.252 "raid_level": "raid5f", 00:17:54.252 "superblock": true, 00:17:54.252 "num_base_bdevs": 4, 00:17:54.252 "num_base_bdevs_discovered": 3, 00:17:54.252 "num_base_bdevs_operational": 3, 00:17:54.252 "base_bdevs_list": [ 00:17:54.252 { 00:17:54.252 "name": null, 00:17:54.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.252 "is_configured": false, 00:17:54.252 "data_offset": 0, 00:17:54.252 "data_size": 63488 00:17:54.252 }, 00:17:54.252 { 00:17:54.252 "name": "BaseBdev2", 00:17:54.252 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:54.252 "is_configured": true, 00:17:54.252 "data_offset": 2048, 00:17:54.252 "data_size": 63488 00:17:54.252 }, 00:17:54.252 { 00:17:54.252 "name": "BaseBdev3", 00:17:54.252 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:54.252 "is_configured": true, 00:17:54.252 "data_offset": 2048, 00:17:54.252 "data_size": 63488 00:17:54.252 }, 00:17:54.252 { 00:17:54.252 "name": "BaseBdev4", 00:17:54.252 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:54.252 "is_configured": true, 00:17:54.252 "data_offset": 
2048, 00:17:54.252 "data_size": 63488 00:17:54.252 } 00:17:54.252 ] 00:17:54.252 }' 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.252 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.819 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:54.819 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.819 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.819 [2024-12-06 18:15:06.817709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.819 [2024-12-06 18:15:06.818025] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:54.820 [2024-12-06 18:15:06.818131] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:54.820 [2024-12-06 18:15:06.818210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.820 [2024-12-06 18:15:06.835950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:54.820 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.820 18:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:54.820 [2024-12-06 18:15:06.846613] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.757 "name": "raid_bdev1", 00:17:55.757 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:55.757 "strip_size_kb": 64, 00:17:55.757 "state": "online", 00:17:55.757 
"raid_level": "raid5f", 00:17:55.757 "superblock": true, 00:17:55.757 "num_base_bdevs": 4, 00:17:55.757 "num_base_bdevs_discovered": 4, 00:17:55.757 "num_base_bdevs_operational": 4, 00:17:55.757 "process": { 00:17:55.757 "type": "rebuild", 00:17:55.757 "target": "spare", 00:17:55.757 "progress": { 00:17:55.757 "blocks": 17280, 00:17:55.757 "percent": 9 00:17:55.757 } 00:17:55.757 }, 00:17:55.757 "base_bdevs_list": [ 00:17:55.757 { 00:17:55.757 "name": "spare", 00:17:55.757 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:55.757 "is_configured": true, 00:17:55.757 "data_offset": 2048, 00:17:55.757 "data_size": 63488 00:17:55.757 }, 00:17:55.757 { 00:17:55.757 "name": "BaseBdev2", 00:17:55.757 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:55.757 "is_configured": true, 00:17:55.757 "data_offset": 2048, 00:17:55.757 "data_size": 63488 00:17:55.757 }, 00:17:55.757 { 00:17:55.757 "name": "BaseBdev3", 00:17:55.757 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:55.757 "is_configured": true, 00:17:55.757 "data_offset": 2048, 00:17:55.757 "data_size": 63488 00:17:55.757 }, 00:17:55.757 { 00:17:55.757 "name": "BaseBdev4", 00:17:55.757 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:55.757 "is_configured": true, 00:17:55.757 "data_offset": 2048, 00:17:55.757 "data_size": 63488 00:17:55.757 } 00:17:55.757 ] 00:17:55.757 }' 00:17:55.757 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.016 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.016 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.016 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.016 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:56.016 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.016 18:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.016 [2024-12-06 18:15:07.982054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.016 [2024-12-06 18:15:08.056592] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:56.016 [2024-12-06 18:15:08.056792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.016 [2024-12-06 18:15:08.056851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.016 [2024-12-06 18:15:08.056886] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.016 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.016 "name": "raid_bdev1", 00:17:56.016 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:56.016 "strip_size_kb": 64, 00:17:56.016 "state": "online", 00:17:56.016 "raid_level": "raid5f", 00:17:56.016 "superblock": true, 00:17:56.016 "num_base_bdevs": 4, 00:17:56.016 "num_base_bdevs_discovered": 3, 00:17:56.016 "num_base_bdevs_operational": 3, 00:17:56.016 "base_bdevs_list": [ 00:17:56.016 { 00:17:56.016 "name": null, 00:17:56.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.016 "is_configured": false, 00:17:56.016 "data_offset": 0, 00:17:56.016 "data_size": 63488 00:17:56.016 }, 00:17:56.016 { 00:17:56.016 "name": "BaseBdev2", 00:17:56.016 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:56.017 "is_configured": true, 00:17:56.017 "data_offset": 2048, 00:17:56.017 "data_size": 63488 00:17:56.017 }, 00:17:56.017 { 00:17:56.017 "name": "BaseBdev3", 00:17:56.017 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:56.017 "is_configured": true, 00:17:56.017 "data_offset": 2048, 00:17:56.017 "data_size": 63488 00:17:56.017 }, 00:17:56.017 { 00:17:56.017 "name": "BaseBdev4", 00:17:56.017 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:56.017 "is_configured": true, 00:17:56.017 "data_offset": 2048, 00:17:56.017 "data_size": 63488 00:17:56.017 } 00:17:56.017 ] 00:17:56.017 }' 
00:17:56.017 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.017 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.585 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:56.585 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.585 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.585 [2024-12-06 18:15:08.585859] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:56.585 [2024-12-06 18:15:08.585940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.585 [2024-12-06 18:15:08.585971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:56.585 [2024-12-06 18:15:08.585986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.585 [2024-12-06 18:15:08.586616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.585 [2024-12-06 18:15:08.586660] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:56.585 [2024-12-06 18:15:08.586783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:56.585 [2024-12-06 18:15:08.586801] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:56.585 [2024-12-06 18:15:08.586813] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:56.585 [2024-12-06 18:15:08.586847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.585 [2024-12-06 18:15:08.605393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:56.585 spare 00:17:56.585 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.585 18:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:56.585 [2024-12-06 18:15:08.616742] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.584 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.584 "name": "raid_bdev1", 00:17:57.584 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:57.584 "strip_size_kb": 64, 00:17:57.584 "state": 
"online", 00:17:57.584 "raid_level": "raid5f", 00:17:57.584 "superblock": true, 00:17:57.584 "num_base_bdevs": 4, 00:17:57.584 "num_base_bdevs_discovered": 4, 00:17:57.584 "num_base_bdevs_operational": 4, 00:17:57.584 "process": { 00:17:57.584 "type": "rebuild", 00:17:57.584 "target": "spare", 00:17:57.584 "progress": { 00:17:57.584 "blocks": 19200, 00:17:57.584 "percent": 10 00:17:57.584 } 00:17:57.584 }, 00:17:57.584 "base_bdevs_list": [ 00:17:57.584 { 00:17:57.584 "name": "spare", 00:17:57.584 "uuid": "9bfef878-88fc-5552-a061-f917ed327ede", 00:17:57.584 "is_configured": true, 00:17:57.584 "data_offset": 2048, 00:17:57.584 "data_size": 63488 00:17:57.584 }, 00:17:57.584 { 00:17:57.584 "name": "BaseBdev2", 00:17:57.585 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:57.585 "is_configured": true, 00:17:57.585 "data_offset": 2048, 00:17:57.585 "data_size": 63488 00:17:57.585 }, 00:17:57.585 { 00:17:57.585 "name": "BaseBdev3", 00:17:57.585 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:57.585 "is_configured": true, 00:17:57.585 "data_offset": 2048, 00:17:57.585 "data_size": 63488 00:17:57.585 }, 00:17:57.585 { 00:17:57.585 "name": "BaseBdev4", 00:17:57.585 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:57.585 "is_configured": true, 00:17:57.585 "data_offset": 2048, 00:17:57.585 "data_size": 63488 00:17:57.585 } 00:17:57.585 ] 00:17:57.585 }' 00:17:57.585 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.585 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.585 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:57.861 18:15:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.861 [2024-12-06 18:15:09.768528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.861 [2024-12-06 18:15:09.826479] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:57.861 [2024-12-06 18:15:09.826581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.861 [2024-12-06 18:15:09.826603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.861 [2024-12-06 18:15:09.826611] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.861 18:15:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.861 "name": "raid_bdev1", 00:17:57.861 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:57.861 "strip_size_kb": 64, 00:17:57.861 "state": "online", 00:17:57.861 "raid_level": "raid5f", 00:17:57.861 "superblock": true, 00:17:57.861 "num_base_bdevs": 4, 00:17:57.861 "num_base_bdevs_discovered": 3, 00:17:57.861 "num_base_bdevs_operational": 3, 00:17:57.861 "base_bdevs_list": [ 00:17:57.861 { 00:17:57.861 "name": null, 00:17:57.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.861 "is_configured": false, 00:17:57.861 "data_offset": 0, 00:17:57.861 "data_size": 63488 00:17:57.861 }, 00:17:57.861 { 00:17:57.861 "name": "BaseBdev2", 00:17:57.861 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:57.861 "is_configured": true, 00:17:57.861 "data_offset": 2048, 00:17:57.861 "data_size": 63488 00:17:57.861 }, 00:17:57.861 { 00:17:57.861 "name": "BaseBdev3", 00:17:57.861 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:57.861 "is_configured": true, 00:17:57.861 "data_offset": 2048, 00:17:57.861 "data_size": 63488 00:17:57.861 }, 00:17:57.861 { 00:17:57.861 "name": "BaseBdev4", 00:17:57.861 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:57.861 "is_configured": true, 00:17:57.861 "data_offset": 2048, 00:17:57.861 
"data_size": 63488 00:17:57.861 } 00:17:57.861 ] 00:17:57.861 }' 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.861 18:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.119 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.119 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.119 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.119 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.119 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.119 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.119 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.119 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.378 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.378 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.378 "name": "raid_bdev1", 00:17:58.378 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:58.378 "strip_size_kb": 64, 00:17:58.378 "state": "online", 00:17:58.378 "raid_level": "raid5f", 00:17:58.378 "superblock": true, 00:17:58.378 "num_base_bdevs": 4, 00:17:58.378 "num_base_bdevs_discovered": 3, 00:17:58.378 "num_base_bdevs_operational": 3, 00:17:58.378 "base_bdevs_list": [ 00:17:58.378 { 00:17:58.378 "name": null, 00:17:58.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.378 
"is_configured": false, 00:17:58.378 "data_offset": 0, 00:17:58.378 "data_size": 63488 00:17:58.378 }, 00:17:58.378 { 00:17:58.378 "name": "BaseBdev2", 00:17:58.378 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:58.378 "is_configured": true, 00:17:58.378 "data_offset": 2048, 00:17:58.378 "data_size": 63488 00:17:58.379 }, 00:17:58.379 { 00:17:58.379 "name": "BaseBdev3", 00:17:58.379 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:58.379 "is_configured": true, 00:17:58.379 "data_offset": 2048, 00:17:58.379 "data_size": 63488 00:17:58.379 }, 00:17:58.379 { 00:17:58.379 "name": "BaseBdev4", 00:17:58.379 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:58.379 "is_configured": true, 00:17:58.379 "data_offset": 2048, 00:17:58.379 "data_size": 63488 00:17:58.379 } 00:17:58.379 ] 00:17:58.379 }' 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.379 18:15:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.379 [2024-12-06 18:15:10.449151] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:58.379 [2024-12-06 18:15:10.449220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.379 [2024-12-06 18:15:10.449244] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:58.379 [2024-12-06 18:15:10.449270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.379 [2024-12-06 18:15:10.449821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.379 [2024-12-06 18:15:10.449857] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:58.379 [2024-12-06 18:15:10.449977] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:58.379 [2024-12-06 18:15:10.449995] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:58.379 [2024-12-06 18:15:10.450010] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:58.379 [2024-12-06 18:15:10.450023] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:58.379 BaseBdev1 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.379 18:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:59.313 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:59.313 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.313 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:59.313 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.314 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.572 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.572 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.572 "name": "raid_bdev1", 00:17:59.572 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:59.572 "strip_size_kb": 64, 00:17:59.572 "state": "online", 00:17:59.572 "raid_level": "raid5f", 00:17:59.572 "superblock": true, 00:17:59.572 "num_base_bdevs": 4, 00:17:59.572 "num_base_bdevs_discovered": 3, 00:17:59.572 "num_base_bdevs_operational": 3, 00:17:59.572 "base_bdevs_list": [ 00:17:59.572 { 00:17:59.572 "name": null, 00:17:59.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.572 "is_configured": false, 00:17:59.572 
"data_offset": 0, 00:17:59.572 "data_size": 63488 00:17:59.572 }, 00:17:59.572 { 00:17:59.572 "name": "BaseBdev2", 00:17:59.572 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:59.572 "is_configured": true, 00:17:59.572 "data_offset": 2048, 00:17:59.572 "data_size": 63488 00:17:59.572 }, 00:17:59.572 { 00:17:59.572 "name": "BaseBdev3", 00:17:59.572 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:59.572 "is_configured": true, 00:17:59.572 "data_offset": 2048, 00:17:59.572 "data_size": 63488 00:17:59.572 }, 00:17:59.572 { 00:17:59.572 "name": "BaseBdev4", 00:17:59.572 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:59.572 "is_configured": true, 00:17:59.572 "data_offset": 2048, 00:17:59.572 "data_size": 63488 00:17:59.572 } 00:17:59.572 ] 00:17:59.572 }' 00:17:59.572 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.572 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.831 "name": "raid_bdev1", 00:17:59.831 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:17:59.831 "strip_size_kb": 64, 00:17:59.831 "state": "online", 00:17:59.831 "raid_level": "raid5f", 00:17:59.831 "superblock": true, 00:17:59.831 "num_base_bdevs": 4, 00:17:59.831 "num_base_bdevs_discovered": 3, 00:17:59.831 "num_base_bdevs_operational": 3, 00:17:59.831 "base_bdevs_list": [ 00:17:59.831 { 00:17:59.831 "name": null, 00:17:59.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.831 "is_configured": false, 00:17:59.831 "data_offset": 0, 00:17:59.831 "data_size": 63488 00:17:59.831 }, 00:17:59.831 { 00:17:59.831 "name": "BaseBdev2", 00:17:59.831 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:17:59.831 "is_configured": true, 00:17:59.831 "data_offset": 2048, 00:17:59.831 "data_size": 63488 00:17:59.831 }, 00:17:59.831 { 00:17:59.831 "name": "BaseBdev3", 00:17:59.831 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:17:59.831 "is_configured": true, 00:17:59.831 "data_offset": 2048, 00:17:59.831 "data_size": 63488 00:17:59.831 }, 00:17:59.831 { 00:17:59.831 "name": "BaseBdev4", 00:17:59.831 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:17:59.831 "is_configured": true, 00:17:59.831 "data_offset": 2048, 00:17:59.831 "data_size": 63488 00:17:59.831 } 00:17:59.831 ] 00:17:59.831 }' 00:17:59.831 18:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.090 
18:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.090 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.090 [2024-12-06 18:15:12.090550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.090 [2024-12-06 18:15:12.090767] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:00.090 [2024-12-06 18:15:12.090786] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:00.090 request: 00:18:00.090 { 00:18:00.090 "base_bdev": "BaseBdev1", 00:18:00.090 "raid_bdev": "raid_bdev1", 00:18:00.090 "method": "bdev_raid_add_base_bdev", 00:18:00.091 "req_id": 1 00:18:00.091 } 00:18:00.091 Got JSON-RPC error response 00:18:00.091 response: 00:18:00.091 { 00:18:00.091 "code": -22, 00:18:00.091 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:00.091 } 00:18:00.091 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:00.091 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:00.091 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:00.091 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:00.091 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:00.091 18:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.033 "name": "raid_bdev1", 00:18:01.033 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:18:01.033 "strip_size_kb": 64, 00:18:01.033 "state": "online", 00:18:01.033 "raid_level": "raid5f", 00:18:01.033 "superblock": true, 00:18:01.033 "num_base_bdevs": 4, 00:18:01.033 "num_base_bdevs_discovered": 3, 00:18:01.033 "num_base_bdevs_operational": 3, 00:18:01.033 "base_bdevs_list": [ 00:18:01.033 { 00:18:01.033 "name": null, 00:18:01.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.033 "is_configured": false, 00:18:01.033 "data_offset": 0, 00:18:01.033 "data_size": 63488 00:18:01.033 }, 00:18:01.033 { 00:18:01.033 "name": "BaseBdev2", 00:18:01.033 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:18:01.033 "is_configured": true, 00:18:01.033 "data_offset": 2048, 00:18:01.033 "data_size": 63488 00:18:01.033 }, 00:18:01.033 { 00:18:01.033 "name": "BaseBdev3", 00:18:01.033 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:18:01.033 "is_configured": true, 00:18:01.033 "data_offset": 2048, 00:18:01.033 "data_size": 63488 00:18:01.033 }, 00:18:01.033 { 00:18:01.033 "name": "BaseBdev4", 00:18:01.033 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:18:01.033 "is_configured": true, 00:18:01.033 "data_offset": 2048, 00:18:01.033 "data_size": 63488 00:18:01.033 } 00:18:01.033 ] 00:18:01.033 }' 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.033 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.601 "name": "raid_bdev1", 00:18:01.601 "uuid": "4fbf9ed3-2926-41d0-b16a-722fdc9ea2ac", 00:18:01.601 "strip_size_kb": 64, 00:18:01.601 "state": "online", 00:18:01.601 "raid_level": "raid5f", 00:18:01.601 "superblock": true, 00:18:01.601 "num_base_bdevs": 4, 00:18:01.601 "num_base_bdevs_discovered": 3, 00:18:01.601 "num_base_bdevs_operational": 3, 00:18:01.601 "base_bdevs_list": [ 00:18:01.601 { 00:18:01.601 "name": null, 00:18:01.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.601 "is_configured": false, 00:18:01.601 "data_offset": 0, 00:18:01.601 "data_size": 63488 00:18:01.601 }, 00:18:01.601 { 00:18:01.601 "name": "BaseBdev2", 00:18:01.601 "uuid": "67b48a5e-bbc9-5138-b3b2-6e7936e37c3e", 00:18:01.601 "is_configured": true, 
00:18:01.601 "data_offset": 2048, 00:18:01.601 "data_size": 63488 00:18:01.601 }, 00:18:01.601 { 00:18:01.601 "name": "BaseBdev3", 00:18:01.601 "uuid": "0c37ef14-4496-54a1-8d4c-6514b6952250", 00:18:01.601 "is_configured": true, 00:18:01.601 "data_offset": 2048, 00:18:01.601 "data_size": 63488 00:18:01.601 }, 00:18:01.601 { 00:18:01.601 "name": "BaseBdev4", 00:18:01.601 "uuid": "5cf3e18e-3bb5-5a1c-b826-c7487f68e701", 00:18:01.601 "is_configured": true, 00:18:01.601 "data_offset": 2048, 00:18:01.601 "data_size": 63488 00:18:01.601 } 00:18:01.601 ] 00:18:01.601 }' 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85693 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85693 ']' 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85693 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85693 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 85693' 00:18:01.601 killing process with pid 85693 00:18:01.601 Received shutdown signal, test time was about 60.000000 seconds 00:18:01.601 00:18:01.601 Latency(us) 00:18:01.601 [2024-12-06T18:15:13.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.601 [2024-12-06T18:15:13.769Z] =================================================================================================================== 00:18:01.601 [2024-12-06T18:15:13.769Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85693 00:18:01.601 [2024-12-06 18:15:13.762335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.601 18:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85693 00:18:01.601 [2024-12-06 18:15:13.762488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.601 [2024-12-06 18:15:13.762589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.601 [2024-12-06 18:15:13.762606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:02.174 [2024-12-06 18:15:14.315160] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.571 18:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:03.571 00:18:03.571 real 0m27.804s 00:18:03.571 user 0m35.031s 00:18:03.571 sys 0m3.174s 00:18:03.571 18:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.571 ************************************ 00:18:03.571 END TEST raid5f_rebuild_test_sb 00:18:03.571 ************************************ 00:18:03.571 18:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 18:15:15 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:03.571 18:15:15 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:03.571 18:15:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:03.571 18:15:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.571 18:15:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 ************************************ 00:18:03.571 START TEST raid_state_function_test_sb_4k 00:18:03.571 ************************************ 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:03.571 18:15:15 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86510 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86510' 00:18:03.571 Process raid pid: 86510 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86510 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86510 ']' 00:18:03.571 18:15:15 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.571 18:15:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 [2024-12-06 18:15:15.702962] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:18:03.571 [2024-12-06 18:15:15.703140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.831 [2024-12-06 18:15:15.896390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.091 [2024-12-06 18:15:16.016890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.091 [2024-12-06 18:15:16.224987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.091 [2024-12-06 18:15:16.225029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.680 [2024-12-06 18:15:16.565614] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.680 [2024-12-06 18:15:16.565677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.680 [2024-12-06 18:15:16.565688] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.680 [2024-12-06 18:15:16.565698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.680 
18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.680 "name": "Existed_Raid", 00:18:04.680 "uuid": "62323b5c-3629-4715-aaed-9fdcc22f4e76", 00:18:04.680 "strip_size_kb": 0, 00:18:04.680 "state": "configuring", 00:18:04.680 "raid_level": "raid1", 00:18:04.680 "superblock": true, 00:18:04.680 "num_base_bdevs": 2, 00:18:04.680 "num_base_bdevs_discovered": 0, 00:18:04.680 "num_base_bdevs_operational": 2, 00:18:04.680 "base_bdevs_list": [ 00:18:04.680 { 00:18:04.680 "name": "BaseBdev1", 00:18:04.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.680 "is_configured": false, 00:18:04.680 "data_offset": 0, 00:18:04.680 "data_size": 0 00:18:04.680 }, 00:18:04.680 { 00:18:04.680 "name": "BaseBdev2", 00:18:04.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.680 "is_configured": false, 00:18:04.680 "data_offset": 0, 00:18:04.680 "data_size": 0 00:18:04.680 } 00:18:04.680 ] 00:18:04.680 }' 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.680 18:15:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.939 [2024-12-06 18:15:17.028780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.939 [2024-12-06 18:15:17.028898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.939 [2024-12-06 18:15:17.040728] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.939 [2024-12-06 18:15:17.040859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.939 [2024-12-06 18:15:17.040892] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.939 [2024-12-06 18:15:17.040921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.939 18:15:17 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.939 [2024-12-06 18:15:17.086103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.939 BaseBdev1 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.939 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.200 [ 00:18:05.200 { 00:18:05.200 "name": "BaseBdev1", 00:18:05.200 "aliases": [ 00:18:05.200 
"41d3bf15-4e63-44fa-a880-36d022103cbc" 00:18:05.200 ], 00:18:05.200 "product_name": "Malloc disk", 00:18:05.200 "block_size": 4096, 00:18:05.200 "num_blocks": 8192, 00:18:05.200 "uuid": "41d3bf15-4e63-44fa-a880-36d022103cbc", 00:18:05.200 "assigned_rate_limits": { 00:18:05.200 "rw_ios_per_sec": 0, 00:18:05.200 "rw_mbytes_per_sec": 0, 00:18:05.200 "r_mbytes_per_sec": 0, 00:18:05.200 "w_mbytes_per_sec": 0 00:18:05.200 }, 00:18:05.200 "claimed": true, 00:18:05.200 "claim_type": "exclusive_write", 00:18:05.200 "zoned": false, 00:18:05.200 "supported_io_types": { 00:18:05.200 "read": true, 00:18:05.200 "write": true, 00:18:05.200 "unmap": true, 00:18:05.200 "flush": true, 00:18:05.200 "reset": true, 00:18:05.200 "nvme_admin": false, 00:18:05.200 "nvme_io": false, 00:18:05.200 "nvme_io_md": false, 00:18:05.200 "write_zeroes": true, 00:18:05.200 "zcopy": true, 00:18:05.200 "get_zone_info": false, 00:18:05.200 "zone_management": false, 00:18:05.200 "zone_append": false, 00:18:05.200 "compare": false, 00:18:05.200 "compare_and_write": false, 00:18:05.200 "abort": true, 00:18:05.200 "seek_hole": false, 00:18:05.200 "seek_data": false, 00:18:05.200 "copy": true, 00:18:05.200 "nvme_iov_md": false 00:18:05.200 }, 00:18:05.200 "memory_domains": [ 00:18:05.200 { 00:18:05.200 "dma_device_id": "system", 00:18:05.201 "dma_device_type": 1 00:18:05.201 }, 00:18:05.201 { 00:18:05.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.201 "dma_device_type": 2 00:18:05.201 } 00:18:05.201 ], 00:18:05.201 "driver_specific": {} 00:18:05.201 } 00:18:05.201 ] 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.201 "name": "Existed_Raid", 00:18:05.201 "uuid": "5c373256-9280-46e7-85a1-667bf32b3fd8", 00:18:05.201 "strip_size_kb": 0, 00:18:05.201 "state": "configuring", 00:18:05.201 "raid_level": "raid1", 00:18:05.201 "superblock": true, 00:18:05.201 "num_base_bdevs": 2, 00:18:05.201 
"num_base_bdevs_discovered": 1, 00:18:05.201 "num_base_bdevs_operational": 2, 00:18:05.201 "base_bdevs_list": [ 00:18:05.201 { 00:18:05.201 "name": "BaseBdev1", 00:18:05.201 "uuid": "41d3bf15-4e63-44fa-a880-36d022103cbc", 00:18:05.201 "is_configured": true, 00:18:05.201 "data_offset": 256, 00:18:05.201 "data_size": 7936 00:18:05.201 }, 00:18:05.201 { 00:18:05.201 "name": "BaseBdev2", 00:18:05.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.201 "is_configured": false, 00:18:05.201 "data_offset": 0, 00:18:05.201 "data_size": 0 00:18:05.201 } 00:18:05.201 ] 00:18:05.201 }' 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.201 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.518 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:05.518 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.518 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.518 [2024-12-06 18:15:17.545383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:05.518 [2024-12-06 18:15:17.545439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:05.518 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.518 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:05.518 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.518 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.518 [2024-12-06 18:15:17.557399] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.518 [2024-12-06 18:15:17.559313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.518 [2024-12-06 18:15:17.559356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.518 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.519 "name": "Existed_Raid", 00:18:05.519 "uuid": "46539f4f-aa7c-4c88-99c4-0eb4a7646933", 00:18:05.519 "strip_size_kb": 0, 00:18:05.519 "state": "configuring", 00:18:05.519 "raid_level": "raid1", 00:18:05.519 "superblock": true, 00:18:05.519 "num_base_bdevs": 2, 00:18:05.519 "num_base_bdevs_discovered": 1, 00:18:05.519 "num_base_bdevs_operational": 2, 00:18:05.519 "base_bdevs_list": [ 00:18:05.519 { 00:18:05.519 "name": "BaseBdev1", 00:18:05.519 "uuid": "41d3bf15-4e63-44fa-a880-36d022103cbc", 00:18:05.519 "is_configured": true, 00:18:05.519 "data_offset": 256, 00:18:05.519 "data_size": 7936 00:18:05.519 }, 00:18:05.519 { 00:18:05.519 "name": "BaseBdev2", 00:18:05.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.519 "is_configured": false, 00:18:05.519 "data_offset": 0, 00:18:05.519 "data_size": 0 00:18:05.519 } 00:18:05.519 ] 00:18:05.519 }' 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.519 18:15:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.097 18:15:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.097 [2024-12-06 18:15:18.064299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.097 [2024-12-06 18:15:18.064705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:06.097 [2024-12-06 18:15:18.064779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.097 [2024-12-06 18:15:18.065102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:06.097 BaseBdev2 00:18:06.097 [2024-12-06 18:15:18.065331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:06.097 [2024-12-06 18:15:18.065349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:06.097 [2024-12-06 18:15:18.065527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:06.097 18:15:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.097 [ 00:18:06.097 { 00:18:06.097 "name": "BaseBdev2", 00:18:06.097 "aliases": [ 00:18:06.097 "5ed617e9-cd8e-47a9-b079-34674e3cbbd8" 00:18:06.097 ], 00:18:06.097 "product_name": "Malloc disk", 00:18:06.097 "block_size": 4096, 00:18:06.097 "num_blocks": 8192, 00:18:06.097 "uuid": "5ed617e9-cd8e-47a9-b079-34674e3cbbd8", 00:18:06.097 "assigned_rate_limits": { 00:18:06.097 "rw_ios_per_sec": 0, 00:18:06.097 "rw_mbytes_per_sec": 0, 00:18:06.097 "r_mbytes_per_sec": 0, 00:18:06.097 "w_mbytes_per_sec": 0 00:18:06.097 }, 00:18:06.097 "claimed": true, 00:18:06.097 "claim_type": "exclusive_write", 00:18:06.097 "zoned": false, 00:18:06.097 "supported_io_types": { 00:18:06.097 "read": true, 00:18:06.097 "write": true, 00:18:06.097 "unmap": true, 00:18:06.097 "flush": true, 00:18:06.097 "reset": true, 00:18:06.097 "nvme_admin": false, 00:18:06.097 "nvme_io": false, 00:18:06.097 "nvme_io_md": false, 00:18:06.097 "write_zeroes": true, 00:18:06.097 "zcopy": true, 00:18:06.097 "get_zone_info": false, 00:18:06.097 "zone_management": false, 00:18:06.097 "zone_append": false, 00:18:06.097 "compare": false, 00:18:06.097 "compare_and_write": false, 00:18:06.097 "abort": true, 00:18:06.097 "seek_hole": false, 00:18:06.097 "seek_data": false, 00:18:06.097 "copy": true, 00:18:06.097 "nvme_iov_md": false 
00:18:06.097 }, 00:18:06.097 "memory_domains": [ 00:18:06.097 { 00:18:06.097 "dma_device_id": "system", 00:18:06.097 "dma_device_type": 1 00:18:06.097 }, 00:18:06.097 { 00:18:06.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.097 "dma_device_type": 2 00:18:06.097 } 00:18:06.097 ], 00:18:06.097 "driver_specific": {} 00:18:06.097 } 00:18:06.097 ] 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.097 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.097 "name": "Existed_Raid", 00:18:06.097 "uuid": "46539f4f-aa7c-4c88-99c4-0eb4a7646933", 00:18:06.097 "strip_size_kb": 0, 00:18:06.097 "state": "online", 00:18:06.097 "raid_level": "raid1", 00:18:06.097 "superblock": true, 00:18:06.097 "num_base_bdevs": 2, 00:18:06.097 "num_base_bdevs_discovered": 2, 00:18:06.097 "num_base_bdevs_operational": 2, 00:18:06.097 "base_bdevs_list": [ 00:18:06.098 { 00:18:06.098 "name": "BaseBdev1", 00:18:06.098 "uuid": "41d3bf15-4e63-44fa-a880-36d022103cbc", 00:18:06.098 "is_configured": true, 00:18:06.098 "data_offset": 256, 00:18:06.098 "data_size": 7936 00:18:06.098 }, 00:18:06.098 { 00:18:06.098 "name": "BaseBdev2", 00:18:06.098 "uuid": "5ed617e9-cd8e-47a9-b079-34674e3cbbd8", 00:18:06.098 "is_configured": true, 00:18:06.098 "data_offset": 256, 00:18:06.098 "data_size": 7936 00:18:06.098 } 00:18:06.098 ] 00:18:06.098 }' 00:18:06.098 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.098 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:06.668 18:15:18 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.668 [2024-12-06 18:15:18.583814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.668 "name": "Existed_Raid", 00:18:06.668 "aliases": [ 00:18:06.668 "46539f4f-aa7c-4c88-99c4-0eb4a7646933" 00:18:06.668 ], 00:18:06.668 "product_name": "Raid Volume", 00:18:06.668 "block_size": 4096, 00:18:06.668 "num_blocks": 7936, 00:18:06.668 "uuid": "46539f4f-aa7c-4c88-99c4-0eb4a7646933", 00:18:06.668 "assigned_rate_limits": { 00:18:06.668 "rw_ios_per_sec": 0, 00:18:06.668 "rw_mbytes_per_sec": 0, 00:18:06.668 "r_mbytes_per_sec": 0, 00:18:06.668 "w_mbytes_per_sec": 0 00:18:06.668 }, 00:18:06.668 "claimed": false, 00:18:06.668 "zoned": false, 00:18:06.668 "supported_io_types": { 00:18:06.668 "read": true, 
00:18:06.668 "write": true, 00:18:06.668 "unmap": false, 00:18:06.668 "flush": false, 00:18:06.668 "reset": true, 00:18:06.668 "nvme_admin": false, 00:18:06.668 "nvme_io": false, 00:18:06.668 "nvme_io_md": false, 00:18:06.668 "write_zeroes": true, 00:18:06.668 "zcopy": false, 00:18:06.668 "get_zone_info": false, 00:18:06.668 "zone_management": false, 00:18:06.668 "zone_append": false, 00:18:06.668 "compare": false, 00:18:06.668 "compare_and_write": false, 00:18:06.668 "abort": false, 00:18:06.668 "seek_hole": false, 00:18:06.668 "seek_data": false, 00:18:06.668 "copy": false, 00:18:06.668 "nvme_iov_md": false 00:18:06.668 }, 00:18:06.668 "memory_domains": [ 00:18:06.668 { 00:18:06.668 "dma_device_id": "system", 00:18:06.668 "dma_device_type": 1 00:18:06.668 }, 00:18:06.668 { 00:18:06.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.668 "dma_device_type": 2 00:18:06.668 }, 00:18:06.668 { 00:18:06.668 "dma_device_id": "system", 00:18:06.668 "dma_device_type": 1 00:18:06.668 }, 00:18:06.668 { 00:18:06.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.668 "dma_device_type": 2 00:18:06.668 } 00:18:06.668 ], 00:18:06.668 "driver_specific": { 00:18:06.668 "raid": { 00:18:06.668 "uuid": "46539f4f-aa7c-4c88-99c4-0eb4a7646933", 00:18:06.668 "strip_size_kb": 0, 00:18:06.668 "state": "online", 00:18:06.668 "raid_level": "raid1", 00:18:06.668 "superblock": true, 00:18:06.668 "num_base_bdevs": 2, 00:18:06.668 "num_base_bdevs_discovered": 2, 00:18:06.668 "num_base_bdevs_operational": 2, 00:18:06.668 "base_bdevs_list": [ 00:18:06.668 { 00:18:06.668 "name": "BaseBdev1", 00:18:06.668 "uuid": "41d3bf15-4e63-44fa-a880-36d022103cbc", 00:18:06.668 "is_configured": true, 00:18:06.668 "data_offset": 256, 00:18:06.668 "data_size": 7936 00:18:06.668 }, 00:18:06.668 { 00:18:06.668 "name": "BaseBdev2", 00:18:06.668 "uuid": "5ed617e9-cd8e-47a9-b079-34674e3cbbd8", 00:18:06.668 "is_configured": true, 00:18:06.668 "data_offset": 256, 00:18:06.668 "data_size": 7936 00:18:06.668 } 
00:18:06.668 ] 00:18:06.668 } 00:18:06.668 } 00:18:06.668 }' 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:06.668 BaseBdev2' 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:06.668 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.669 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.929 [2024-12-06 18:15:18.835119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:06.929 18:15:18 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.929 "name": "Existed_Raid", 00:18:06.929 "uuid": "46539f4f-aa7c-4c88-99c4-0eb4a7646933", 00:18:06.929 "strip_size_kb": 0, 00:18:06.929 "state": "online", 00:18:06.929 "raid_level": "raid1", 00:18:06.929 "superblock": true, 00:18:06.929 
"num_base_bdevs": 2, 00:18:06.929 "num_base_bdevs_discovered": 1, 00:18:06.929 "num_base_bdevs_operational": 1, 00:18:06.929 "base_bdevs_list": [ 00:18:06.929 { 00:18:06.929 "name": null, 00:18:06.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.929 "is_configured": false, 00:18:06.929 "data_offset": 0, 00:18:06.929 "data_size": 7936 00:18:06.929 }, 00:18:06.929 { 00:18:06.929 "name": "BaseBdev2", 00:18:06.929 "uuid": "5ed617e9-cd8e-47a9-b079-34674e3cbbd8", 00:18:06.929 "is_configured": true, 00:18:06.929 "data_offset": 256, 00:18:06.929 "data_size": 7936 00:18:06.929 } 00:18:06.929 ] 00:18:06.929 }' 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.929 18:15:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.496 [2024-12-06 18:15:19.442513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:07.496 [2024-12-06 18:15:19.442686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.496 [2024-12-06 18:15:19.542614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.496 [2024-12-06 18:15:19.542757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.496 [2024-12-06 18:15:19.542799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:07.496 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:07.497 18:15:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86510 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86510 ']' 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86510 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86510 00:18:07.497 killing process with pid 86510 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86510' 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86510 00:18:07.497 [2024-12-06 18:15:19.642032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.497 18:15:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86510 00:18:07.497 [2024-12-06 18:15:19.660167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.875 18:15:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:08.875 ************************************ 00:18:08.875 END TEST raid_state_function_test_sb_4k 00:18:08.875 
************************************
00:18:08.875
00:18:08.875 real 0m5.219s
00:18:08.875 user 0m7.519s
00:18:08.875 sys 0m0.913s
00:18:08.875 18:15:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:08.875 18:15:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:18:08.875 18:15:20 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2
00:18:08.875 18:15:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:18:08.875 18:15:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:08.875 18:15:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:18:08.875 ************************************
00:18:08.875 START TEST raid_superblock_test_4k
00:18:08.875 ************************************
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:18:08.875 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86762
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86762
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86762 ']'
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:08.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:08.876 18:15:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:08.876 [2024-12-06 18:15:20.950993] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization...
00:18:08.876 [2024-12-06 18:15:20.951192] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86762 ]
00:18:09.135 [2024-12-06 18:15:21.122804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:09.135 [2024-12-06 18:15:21.241720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:09.394 [2024-12-06 18:15:21.449186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:09.394 [2024-12-06 18:15:21.449314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.654 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:09.915 malloc1
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:09.915 [2024-12-06 18:15:21.861431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:09.915 [2024-12-06 18:15:21.861553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:09.915 [2024-12-06 18:15:21.861580] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:09.915 [2024-12-06 18:15:21.861590] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:09.915 [2024-12-06 18:15:21.863810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:09.915 [2024-12-06 18:15:21.863854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:09.915 pt1
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:09.915 malloc2
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:09.915 [2024-12-06 18:15:21.921962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:09.915 [2024-12-06 18:15:21.922093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:09.915 [2024-12-06 18:15:21.922139] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:09.915 [2024-12-06 18:15:21.922172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:09.915 [2024-12-06 18:15:21.924389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:09.915 [2024-12-06 18:15:21.924466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:09.915 pt2
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.915 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:09.915 [2024-12-06 18:15:21.933987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:09.915 [2024-12-06 18:15:21.935886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:09.915 [2024-12-06 18:15:21.936139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:18:09.915 [2024-12-06 18:15:21.936195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:09.915 [2024-12-06 18:15:21.936486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:18:09.915 [2024-12-06 18:15:21.936694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:18:09.915 [2024-12-06 18:15:21.936746] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:18:09.916 [2024-12-06 18:15:21.936962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:09.916 "name": "raid_bdev1",
00:18:09.916 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c",
00:18:09.916 "strip_size_kb": 0,
00:18:09.916 "state": "online",
00:18:09.916 "raid_level": "raid1",
00:18:09.916 "superblock": true,
00:18:09.916 "num_base_bdevs": 2,
00:18:09.916 "num_base_bdevs_discovered": 2,
00:18:09.916 "num_base_bdevs_operational": 2,
00:18:09.916 "base_bdevs_list": [
00:18:09.916 {
00:18:09.916 "name": "pt1",
00:18:09.916 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:09.916 "is_configured": true,
00:18:09.916 "data_offset": 256,
00:18:09.916 "data_size": 7936
00:18:09.916 },
00:18:09.916 {
00:18:09.916 "name": "pt2",
00:18:09.916 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:09.916 "is_configured": true,
00:18:09.916 "data_offset": 256,
00:18:09.916 "data_size": 7936
00:18:09.916 }
00:18:09.916 ]
00:18:09.916 }'
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:09.916 18:15:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.485 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:10.486 [2024-12-06 18:15:22.409471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:10.486 "name": "raid_bdev1",
00:18:10.486 "aliases": [
00:18:10.486 "70bd9b52-d650-4b8a-ac02-d7a06611c93c"
00:18:10.486 ],
00:18:10.486 "product_name": "Raid Volume",
00:18:10.486 "block_size": 4096,
00:18:10.486 "num_blocks": 7936,
00:18:10.486 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c",
00:18:10.486 "assigned_rate_limits": {
00:18:10.486 "rw_ios_per_sec": 0,
00:18:10.486 "rw_mbytes_per_sec": 0,
00:18:10.486 "r_mbytes_per_sec": 0,
00:18:10.486 "w_mbytes_per_sec": 0
00:18:10.486 },
00:18:10.486 "claimed": false,
00:18:10.486 "zoned": false,
00:18:10.486 "supported_io_types": {
00:18:10.486 "read": true,
00:18:10.486 "write": true,
00:18:10.486 "unmap": false,
00:18:10.486 "flush": false,
00:18:10.486 "reset": true,
00:18:10.486 "nvme_admin": false,
00:18:10.486 "nvme_io": false,
00:18:10.486 "nvme_io_md": false,
00:18:10.486 "write_zeroes": true,
00:18:10.486 "zcopy": false,
00:18:10.486 "get_zone_info": false,
00:18:10.486 "zone_management": false,
00:18:10.486 "zone_append": false,
00:18:10.486 "compare": false,
00:18:10.486 "compare_and_write": false,
00:18:10.486 "abort": false,
00:18:10.486 "seek_hole": false,
00:18:10.486 "seek_data": false,
00:18:10.486 "copy": false,
00:18:10.486 "nvme_iov_md": false
00:18:10.486 },
00:18:10.486 "memory_domains": [
00:18:10.486 {
00:18:10.486 "dma_device_id": "system",
00:18:10.486 "dma_device_type": 1
00:18:10.486 },
00:18:10.486 {
00:18:10.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:10.486 "dma_device_type": 2
00:18:10.486 },
00:18:10.486 {
00:18:10.486 "dma_device_id": "system",
00:18:10.486 "dma_device_type": 1
00:18:10.486 },
00:18:10.486 {
00:18:10.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:10.486 "dma_device_type": 2
00:18:10.486 }
00:18:10.486 ],
00:18:10.486 "driver_specific": {
00:18:10.486 "raid": {
00:18:10.486 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c",
00:18:10.486 "strip_size_kb": 0,
00:18:10.486 "state": "online",
00:18:10.486 "raid_level": "raid1",
00:18:10.486 "superblock": true,
00:18:10.486 "num_base_bdevs": 2,
00:18:10.486 "num_base_bdevs_discovered": 2,
00:18:10.486 "num_base_bdevs_operational": 2,
00:18:10.486 "base_bdevs_list": [
00:18:10.486 {
00:18:10.486 "name": "pt1",
00:18:10.486 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:10.486 "is_configured": true,
00:18:10.486 "data_offset": 256,
00:18:10.486 "data_size": 7936
00:18:10.486 },
00:18:10.486 {
00:18:10.486 "name": "pt2",
00:18:10.486 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:10.486 "is_configured": true,
00:18:10.486 "data_offset": 256,
00:18:10.486 "data_size": 7936
00:18:10.486 }
00:18:10.486 ]
00:18:10.486 }
00:18:10.486 }
00:18:10.486 }'
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:10.486 pt2'
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 '
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:18:10.486 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.749 [2024-12-06 18:15:22.661065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=70bd9b52-d650-4b8a-ac02-d7a06611c93c
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 70bd9b52-d650-4b8a-ac02-d7a06611c93c ']'
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.749 [2024-12-06 18:15:22.704655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:10.749 [2024-12-06 18:15:22.704728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:10.749 [2024-12-06 18:15:22.704851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:10.749 [2024-12-06 18:15:22.704918] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:10.749 [2024-12-06 18:15:22.704931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:18:10.749 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.750 [2024-12-06 18:15:22.832502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:18:10.750 [2024-12-06 18:15:22.834570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:18:10.750 [2024-12-06 18:15:22.834646] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:18:10.750 [2024-12-06 18:15:22.834724] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:18:10.750 [2024-12-06 18:15:22.834741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:10.750 [2024-12-06 18:15:22.834754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:18:10.750 request:
00:18:10.750 {
00:18:10.750 "name": "raid_bdev1",
00:18:10.750 "raid_level": "raid1",
00:18:10.750 "base_bdevs": [
00:18:10.750 "malloc1",
00:18:10.750 "malloc2"
00:18:10.750 ],
00:18:10.750 "superblock": false,
00:18:10.750 "method": "bdev_raid_create",
00:18:10.750 "req_id": 1
00:18:10.750 }
00:18:10.750 Got JSON-RPC error response
00:18:10.750 response:
00:18:10.750 {
00:18:10.750 "code": -17,
00:18:10.750 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:10.750 }
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:10.750 [2024-12-06 18:15:22.896349] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:10.750 [2024-12-06 18:15:22.896471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:10.750 [2024-12-06 18:15:22.896530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:10.750 [2024-12-06 18:15:22.896573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:10.750 [2024-12-06 18:15:22.898988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:10.750 [2024-12-06 18:15:22.899088] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:10.750 [2024-12-06 18:15:22.899222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:10.750 [2024-12-06 18:15:22.899309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:10.750 pt1
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.750 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:11.073 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.073 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:11.073 "name": "raid_bdev1",
00:18:11.073 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c",
00:18:11.073 "strip_size_kb": 0,
00:18:11.073 "state": "configuring",
00:18:11.073 "raid_level": "raid1",
00:18:11.073 "superblock": true,
00:18:11.073 "num_base_bdevs": 2,
00:18:11.073 "num_base_bdevs_discovered": 1,
00:18:11.073 "num_base_bdevs_operational": 2,
00:18:11.073 "base_bdevs_list": [
00:18:11.073 {
00:18:11.073 "name": "pt1",
00:18:11.073 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:11.073 "is_configured": true,
00:18:11.073 "data_offset": 256,
00:18:11.073 "data_size": 7936
00:18:11.073 },
00:18:11.073 {
00:18:11.073 "name": null,
00:18:11.073 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:11.073 "is_configured": false,
00:18:11.073 "data_offset": 256,
00:18:11.073 "data_size": 7936
00:18:11.073 }
00:18:11.073 ]
00:18:11.073 }'
00:18:11.073 18:15:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:11.073 18:15:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:11.332 [2024-12-06 18:15:23.339647] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:11.332 [2024-12-06 18:15:23.339735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:11.332 [2024-12-06 18:15:23.339758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:11.332 [2024-12-06 18:15:23.339769] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:11.332 [2024-12-06 18:15:23.340251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:11.332 [2024-12-06 18:15:23.340273] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:11.332 [2024-12-06 18:15:23.340359] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:18:11.332 [2024-12-06 18:15:23.340386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:11.332 [2024-12-06 18:15:23.340519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:18:11.332 [2024-12-06 18:15:23.340531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:11.332 [2024-12-06 18:15:23.340769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:18:11.332 [2024-12-06 18:15:23.340925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:18:11.332 [2024-12-06 18:15:23.340941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:18:11.332 [2024-12-06 18:15:23.341112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:11.332 pt2
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.332 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:11.332 "name": "raid_bdev1",
00:18:11.332 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c",
00:18:11.332 "strip_size_kb": 0,
00:18:11.332 "state": "online",
00:18:11.332 "raid_level": "raid1",
00:18:11.332 "superblock": true,
00:18:11.332 "num_base_bdevs": 2,
00:18:11.332 "num_base_bdevs_discovered": 2,
00:18:11.332 "num_base_bdevs_operational": 2,
00:18:11.332 "base_bdevs_list": [
00:18:11.332 {
00:18:11.332 "name": "pt1",
00:18:11.333 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:11.333 "is_configured": true,
00:18:11.333 "data_offset": 256,
00:18:11.333 "data_size": 7936
00:18:11.333 },
00:18:11.333 {
00:18:11.333 "name": "pt2",
00:18:11.333 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:11.333 "is_configured": true,
00:18:11.333 "data_offset": 256,
00:18:11.333 "data_size": 7936
00:18:11.333 }
00:18:11.333 ]
00:18:11.333 }'
00:18:11.333 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:11.333 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:11.902 [2024-12-06 18:15:23.827104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:11.902 "name": "raid_bdev1",
00:18:11.902 "aliases": [
00:18:11.902 "70bd9b52-d650-4b8a-ac02-d7a06611c93c"
00:18:11.902 ],
00:18:11.902 "product_name": "Raid Volume",
00:18:11.902 "block_size": 4096,
00:18:11.902 "num_blocks": 7936,
00:18:11.902 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c",
00:18:11.902 "assigned_rate_limits": {
00:18:11.902 "rw_ios_per_sec": 0,
00:18:11.902 "rw_mbytes_per_sec": 0,
00:18:11.902 "r_mbytes_per_sec": 0,
00:18:11.902 "w_mbytes_per_sec": 0
00:18:11.902 },
00:18:11.902 "claimed": false,
00:18:11.902 "zoned": false,
00:18:11.902 "supported_io_types": {
00:18:11.902 "read": true,
00:18:11.902 "write": true,
00:18:11.902 "unmap": false,
00:18:11.902 "flush": false, 00:18:11.902 "reset": true, 00:18:11.902 "nvme_admin": false, 00:18:11.902 "nvme_io": false, 00:18:11.902 "nvme_io_md": false, 00:18:11.902 "write_zeroes": true, 00:18:11.902 "zcopy": false, 00:18:11.902 "get_zone_info": false, 00:18:11.902 "zone_management": false, 00:18:11.902 "zone_append": false, 00:18:11.902 "compare": false, 00:18:11.902 "compare_and_write": false, 00:18:11.902 "abort": false, 00:18:11.902 "seek_hole": false, 00:18:11.902 "seek_data": false, 00:18:11.902 "copy": false, 00:18:11.902 "nvme_iov_md": false 00:18:11.902 }, 00:18:11.902 "memory_domains": [ 00:18:11.902 { 00:18:11.902 "dma_device_id": "system", 00:18:11.902 "dma_device_type": 1 00:18:11.902 }, 00:18:11.902 { 00:18:11.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.902 "dma_device_type": 2 00:18:11.902 }, 00:18:11.902 { 00:18:11.902 "dma_device_id": "system", 00:18:11.902 "dma_device_type": 1 00:18:11.902 }, 00:18:11.902 { 00:18:11.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.902 "dma_device_type": 2 00:18:11.902 } 00:18:11.902 ], 00:18:11.902 "driver_specific": { 00:18:11.902 "raid": { 00:18:11.902 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c", 00:18:11.902 "strip_size_kb": 0, 00:18:11.902 "state": "online", 00:18:11.902 "raid_level": "raid1", 00:18:11.902 "superblock": true, 00:18:11.902 "num_base_bdevs": 2, 00:18:11.902 "num_base_bdevs_discovered": 2, 00:18:11.902 "num_base_bdevs_operational": 2, 00:18:11.902 "base_bdevs_list": [ 00:18:11.902 { 00:18:11.902 "name": "pt1", 00:18:11.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.902 "is_configured": true, 00:18:11.902 "data_offset": 256, 00:18:11.902 "data_size": 7936 00:18:11.902 }, 00:18:11.902 { 00:18:11.902 "name": "pt2", 00:18:11.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.902 "is_configured": true, 00:18:11.902 "data_offset": 256, 00:18:11.902 "data_size": 7936 00:18:11.902 } 00:18:11.902 ] 00:18:11.902 } 00:18:11.902 } 00:18:11.902 }' 00:18:11.902 
18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:11.902 pt2' 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.902 18:15:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.902 
18:15:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.902 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.902 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:11.902 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:11.902 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.902 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.902 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.902 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:11.902 [2024-12-06 18:15:24.046728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.902 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 70bd9b52-d650-4b8a-ac02-d7a06611c93c '!=' 70bd9b52-d650-4b8a-ac02-d7a06611c93c ']' 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.162 [2024-12-06 18:15:24.094412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:12.162 
18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.162 "name": "raid_bdev1", 00:18:12.162 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c", 
00:18:12.162 "strip_size_kb": 0, 00:18:12.162 "state": "online", 00:18:12.162 "raid_level": "raid1", 00:18:12.162 "superblock": true, 00:18:12.162 "num_base_bdevs": 2, 00:18:12.162 "num_base_bdevs_discovered": 1, 00:18:12.162 "num_base_bdevs_operational": 1, 00:18:12.162 "base_bdevs_list": [ 00:18:12.162 { 00:18:12.162 "name": null, 00:18:12.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.162 "is_configured": false, 00:18:12.162 "data_offset": 0, 00:18:12.162 "data_size": 7936 00:18:12.162 }, 00:18:12.162 { 00:18:12.162 "name": "pt2", 00:18:12.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.162 "is_configured": true, 00:18:12.162 "data_offset": 256, 00:18:12.162 "data_size": 7936 00:18:12.162 } 00:18:12.162 ] 00:18:12.162 }' 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.162 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.421 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:12.421 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.421 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.421 [2024-12-06 18:15:24.541611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.421 [2024-12-06 18:15:24.541705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.421 [2024-12-06 18:15:24.541827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.421 [2024-12-06 18:15:24.541906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.421 [2024-12-06 18:15:24.541965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:12.421 18:15:24 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.421 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.421 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.421 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:12.421 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.421 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:12.681 18:15:24 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.681 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.681 [2024-12-06 18:15:24.617476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.681 [2024-12-06 18:15:24.617545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.681 [2024-12-06 18:15:24.617565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:12.681 [2024-12-06 18:15:24.617577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.681 [2024-12-06 18:15:24.620099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.681 [2024-12-06 18:15:24.620141] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.681 [2024-12-06 18:15:24.620255] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:12.681 [2024-12-06 18:15:24.620314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.681 [2024-12-06 18:15:24.620436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:12.681 [2024-12-06 18:15:24.620456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:12.681 [2024-12-06 18:15:24.620710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:12.681 [2024-12-06 18:15:24.620892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:12.682 [2024-12-06 18:15:24.620903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:18:12.682 [2024-12-06 18:15:24.621092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.682 pt2 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.682 "name": "raid_bdev1", 00:18:12.682 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c", 00:18:12.682 "strip_size_kb": 0, 00:18:12.682 "state": "online", 00:18:12.682 "raid_level": "raid1", 00:18:12.682 "superblock": true, 00:18:12.682 "num_base_bdevs": 2, 00:18:12.682 "num_base_bdevs_discovered": 1, 00:18:12.682 "num_base_bdevs_operational": 1, 00:18:12.682 "base_bdevs_list": [ 00:18:12.682 { 00:18:12.682 "name": null, 00:18:12.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.682 "is_configured": false, 00:18:12.682 "data_offset": 256, 00:18:12.682 "data_size": 7936 00:18:12.682 }, 00:18:12.682 { 00:18:12.682 "name": "pt2", 00:18:12.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.682 "is_configured": true, 00:18:12.682 "data_offset": 256, 00:18:12.682 "data_size": 7936 00:18:12.682 } 00:18:12.682 ] 00:18:12.682 }' 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.682 18:15:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.942 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:12.942 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.942 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.942 [2024-12-06 18:15:25.076691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.942 [2024-12-06 18:15:25.076781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.942 [2024-12-06 18:15:25.076896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.942 [2024-12-06 18:15:25.076973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.942 [2024-12-06 18:15:25.077036] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:12.942 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.942 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:12.942 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.942 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.942 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.942 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.200 [2024-12-06 18:15:25.124635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.200 [2024-12-06 18:15:25.124748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.200 [2024-12-06 18:15:25.124790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:13.200 [2024-12-06 18:15:25.124822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.200 [2024-12-06 18:15:25.127141] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.200 [2024-12-06 18:15:25.127213] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.200 [2024-12-06 18:15:25.127335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:13.200 [2024-12-06 18:15:25.127412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.200 [2024-12-06 18:15:25.127627] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:13.200 pt1 00:18:13.200 [2024-12-06 18:15:25.127691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.200 [2024-12-06 18:15:25.127714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:13.200 [2024-12-06 18:15:25.127787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.200 [2024-12-06 18:15:25.127875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:13.200 [2024-12-06 18:15:25.127884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.200 [2024-12-06 18:15:25.128181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:13.200 [2024-12-06 18:15:25.128362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:13.200 [2024-12-06 18:15:25.128377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:13.200 [2024-12-06 18:15:25.128597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.200 "name": "raid_bdev1", 00:18:13.200 "uuid": "70bd9b52-d650-4b8a-ac02-d7a06611c93c", 00:18:13.200 "strip_size_kb": 0, 00:18:13.200 "state": "online", 00:18:13.200 "raid_level": "raid1", 
00:18:13.200 "superblock": true, 00:18:13.200 "num_base_bdevs": 2, 00:18:13.200 "num_base_bdevs_discovered": 1, 00:18:13.200 "num_base_bdevs_operational": 1, 00:18:13.200 "base_bdevs_list": [ 00:18:13.200 { 00:18:13.200 "name": null, 00:18:13.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.200 "is_configured": false, 00:18:13.200 "data_offset": 256, 00:18:13.200 "data_size": 7936 00:18:13.200 }, 00:18:13.200 { 00:18:13.200 "name": "pt2", 00:18:13.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.200 "is_configured": true, 00:18:13.200 "data_offset": 256, 00:18:13.200 "data_size": 7936 00:18:13.200 } 00:18:13.200 ] 00:18:13.200 }' 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.200 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.458 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:13.458 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.458 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:13.458 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.458 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.717 
[2024-12-06 18:15:25.640035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 70bd9b52-d650-4b8a-ac02-d7a06611c93c '!=' 70bd9b52-d650-4b8a-ac02-d7a06611c93c ']' 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86762 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86762 ']' 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86762 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86762 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.717 killing process with pid 86762 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86762' 00:18:13.717 18:15:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86762 00:18:13.717 [2024-12-06 18:15:25.726630] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.717 [2024-12-06 18:15:25.726743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.718 [2024-12-06 18:15:25.726798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.718 18:15:25 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86762 00:18:13.718 [2024-12-06 18:15:25.726813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:13.976 [2024-12-06 18:15:25.938577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.353 18:15:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:15.353 00:18:15.353 real 0m6.219s 00:18:15.353 user 0m9.435s 00:18:15.353 sys 0m1.094s 00:18:15.353 18:15:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.353 ************************************ 00:18:15.353 END TEST raid_superblock_test_4k 00:18:15.353 ************************************ 00:18:15.353 18:15:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.353 18:15:27 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:15.353 18:15:27 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:15.353 18:15:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:15.353 18:15:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.353 18:15:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.353 ************************************ 00:18:15.353 START TEST raid_rebuild_test_sb_4k 00:18:15.353 ************************************ 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:15.353 18:15:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87091 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87091 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87091 ']' 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.353 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.354 18:15:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.354 [2024-12-06 18:15:27.263884] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:18:15.354 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:15.354 Zero copy mechanism will not be used. 
00:18:15.354 [2024-12-06 18:15:27.264092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87091 ] 00:18:15.354 [2024-12-06 18:15:27.439458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.613 [2024-12-06 18:15:27.550998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.613 [2024-12-06 18:15:27.744408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.613 [2024-12-06 18:15:27.744564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.182 BaseBdev1_malloc 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.182 [2024-12-06 18:15:28.157943] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:16.182 [2024-12-06 18:15:28.158013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.182 [2024-12-06 18:15:28.158039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:16.182 [2024-12-06 18:15:28.158051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.182 [2024-12-06 18:15:28.160497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.182 [2024-12-06 18:15:28.160546] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:16.182 BaseBdev1 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.182 BaseBdev2_malloc 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.182 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.182 [2024-12-06 18:15:28.215295] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:16.182 [2024-12-06 18:15:28.215404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:16.182 [2024-12-06 18:15:28.215449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:16.182 [2024-12-06 18:15:28.215461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.183 [2024-12-06 18:15:28.217719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.183 [2024-12-06 18:15:28.217757] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:16.183 BaseBdev2 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.183 spare_malloc 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.183 spare_delay 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.183 
[2024-12-06 18:15:28.295207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.183 [2024-12-06 18:15:28.295268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.183 [2024-12-06 18:15:28.295305] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:16.183 [2024-12-06 18:15:28.295317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.183 [2024-12-06 18:15:28.297607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.183 [2024-12-06 18:15:28.297648] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.183 spare 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.183 [2024-12-06 18:15:28.303247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.183 [2024-12-06 18:15:28.305235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.183 [2024-12-06 18:15:28.305430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:16.183 [2024-12-06 18:15:28.305445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:16.183 [2024-12-06 18:15:28.305689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:16.183 [2024-12-06 18:15:28.305860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:16.183 [2024-12-06 
18:15:28.305869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:16.183 [2024-12-06 18:15:28.306018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.183 18:15:28 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.447 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.447 "name": "raid_bdev1", 00:18:16.447 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:16.447 "strip_size_kb": 0, 00:18:16.447 "state": "online", 00:18:16.447 "raid_level": "raid1", 00:18:16.447 "superblock": true, 00:18:16.448 "num_base_bdevs": 2, 00:18:16.448 "num_base_bdevs_discovered": 2, 00:18:16.448 "num_base_bdevs_operational": 2, 00:18:16.448 "base_bdevs_list": [ 00:18:16.448 { 00:18:16.448 "name": "BaseBdev1", 00:18:16.448 "uuid": "28196e3c-c625-5872-bd8c-2f77fbd32b10", 00:18:16.448 "is_configured": true, 00:18:16.448 "data_offset": 256, 00:18:16.448 "data_size": 7936 00:18:16.448 }, 00:18:16.448 { 00:18:16.448 "name": "BaseBdev2", 00:18:16.448 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:16.448 "is_configured": true, 00:18:16.448 "data_offset": 256, 00:18:16.448 "data_size": 7936 00:18:16.448 } 00:18:16.448 ] 00:18:16.448 }' 00:18:16.448 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.448 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 [2024-12-06 18:15:28.758877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.713 18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.713 
18:15:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:16.976 [2024-12-06 18:15:29.058086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:16.976 /dev/nbd0 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.976 1+0 records in 00:18:16.976 1+0 records out 00:18:16.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538314 s, 7.6 MB/s 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:16.976 18:15:29 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:16.976 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:17.909 7936+0 records in 00:18:17.909 7936+0 records out 00:18:17.909 32505856 bytes (33 MB, 31 MiB) copied, 0.68602 s, 47.4 MB/s 00:18:17.909 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:17.909 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:17.909 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:17.909 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:17.909 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:17.909 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.909 18:15:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:18.168 
[2024-12-06 18:15:30.079277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.168 [2024-12-06 18:15:30.103332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.168 18:15:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.168 "name": "raid_bdev1", 00:18:18.168 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:18.168 "strip_size_kb": 0, 00:18:18.168 "state": "online", 00:18:18.168 "raid_level": "raid1", 00:18:18.168 "superblock": true, 00:18:18.168 "num_base_bdevs": 2, 00:18:18.168 "num_base_bdevs_discovered": 1, 00:18:18.168 "num_base_bdevs_operational": 1, 00:18:18.168 "base_bdevs_list": [ 00:18:18.168 { 00:18:18.168 "name": null, 00:18:18.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.168 "is_configured": false, 00:18:18.168 "data_offset": 0, 00:18:18.168 "data_size": 7936 00:18:18.168 }, 00:18:18.168 { 00:18:18.168 "name": "BaseBdev2", 00:18:18.168 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:18.168 "is_configured": true, 00:18:18.168 "data_offset": 256, 00:18:18.168 
"data_size": 7936 00:18:18.168 } 00:18:18.168 ] 00:18:18.168 }' 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.168 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.426 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:18.426 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.426 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.426 [2024-12-06 18:15:30.582539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.686 [2024-12-06 18:15:30.602581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:18.686 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.686 18:15:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:18.686 [2024-12-06 18:15:30.604726] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.625 "name": "raid_bdev1", 00:18:19.625 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:19.625 "strip_size_kb": 0, 00:18:19.625 "state": "online", 00:18:19.625 "raid_level": "raid1", 00:18:19.625 "superblock": true, 00:18:19.625 "num_base_bdevs": 2, 00:18:19.625 "num_base_bdevs_discovered": 2, 00:18:19.625 "num_base_bdevs_operational": 2, 00:18:19.625 "process": { 00:18:19.625 "type": "rebuild", 00:18:19.625 "target": "spare", 00:18:19.625 "progress": { 00:18:19.625 "blocks": 2560, 00:18:19.625 "percent": 32 00:18:19.625 } 00:18:19.625 }, 00:18:19.625 "base_bdevs_list": [ 00:18:19.625 { 00:18:19.625 "name": "spare", 00:18:19.625 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:19.625 "is_configured": true, 00:18:19.625 "data_offset": 256, 00:18:19.625 "data_size": 7936 00:18:19.625 }, 00:18:19.625 { 00:18:19.625 "name": "BaseBdev2", 00:18:19.625 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:19.625 "is_configured": true, 00:18:19.625 "data_offset": 256, 00:18:19.625 "data_size": 7936 00:18:19.625 } 00:18:19.625 ] 00:18:19.625 }' 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:18:19.625 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:19.626 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.626 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.626 [2024-12-06 18:15:31.756174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.885 [2024-12-06 18:15:31.810423] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:19.885 [2024-12-06 18:15:31.810527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.885 [2024-12-06 18:15:31.810546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.885 [2024-12-06 18:15:31.810557] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.885 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.886 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.886 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.886 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.886 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.886 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.886 "name": "raid_bdev1", 00:18:19.886 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:19.886 "strip_size_kb": 0, 00:18:19.886 "state": "online", 00:18:19.886 "raid_level": "raid1", 00:18:19.886 "superblock": true, 00:18:19.886 "num_base_bdevs": 2, 00:18:19.886 "num_base_bdevs_discovered": 1, 00:18:19.886 "num_base_bdevs_operational": 1, 00:18:19.886 "base_bdevs_list": [ 00:18:19.886 { 00:18:19.886 "name": null, 00:18:19.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.886 "is_configured": false, 00:18:19.886 "data_offset": 0, 00:18:19.886 "data_size": 7936 00:18:19.886 }, 00:18:19.886 { 00:18:19.886 "name": "BaseBdev2", 00:18:19.886 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:19.886 "is_configured": true, 00:18:19.886 "data_offset": 256, 00:18:19.886 "data_size": 7936 00:18:19.886 } 00:18:19.886 ] 00:18:19.886 }' 00:18:19.886 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.886 18:15:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.145 18:15:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.145 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.145 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.145 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.145 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.145 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.145 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.145 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.145 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.145 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.405 "name": "raid_bdev1", 00:18:20.405 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:20.405 "strip_size_kb": 0, 00:18:20.405 "state": "online", 00:18:20.405 "raid_level": "raid1", 00:18:20.405 "superblock": true, 00:18:20.405 "num_base_bdevs": 2, 00:18:20.405 "num_base_bdevs_discovered": 1, 00:18:20.405 "num_base_bdevs_operational": 1, 00:18:20.405 "base_bdevs_list": [ 00:18:20.405 { 00:18:20.405 "name": null, 00:18:20.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.405 "is_configured": false, 00:18:20.405 "data_offset": 0, 00:18:20.405 "data_size": 7936 00:18:20.405 }, 00:18:20.405 { 00:18:20.405 "name": "BaseBdev2", 00:18:20.405 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:20.405 "is_configured": true, 00:18:20.405 "data_offset": 
256, 00:18:20.405 "data_size": 7936 00:18:20.405 } 00:18:20.405 ] 00:18:20.405 }' 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.405 [2024-12-06 18:15:32.411906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.405 [2024-12-06 18:15:32.429274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.405 18:15:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:20.405 [2024-12-06 18:15:32.431364] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.343 "name": "raid_bdev1", 00:18:21.343 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:21.343 "strip_size_kb": 0, 00:18:21.343 "state": "online", 00:18:21.343 "raid_level": "raid1", 00:18:21.343 "superblock": true, 00:18:21.343 "num_base_bdevs": 2, 00:18:21.343 "num_base_bdevs_discovered": 2, 00:18:21.343 "num_base_bdevs_operational": 2, 00:18:21.343 "process": { 00:18:21.343 "type": "rebuild", 00:18:21.343 "target": "spare", 00:18:21.343 "progress": { 00:18:21.343 "blocks": 2560, 00:18:21.343 "percent": 32 00:18:21.343 } 00:18:21.343 }, 00:18:21.343 "base_bdevs_list": [ 00:18:21.343 { 00:18:21.343 "name": "spare", 00:18:21.343 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:21.343 "is_configured": true, 00:18:21.343 "data_offset": 256, 00:18:21.343 "data_size": 7936 00:18:21.343 }, 00:18:21.343 { 00:18:21.343 "name": "BaseBdev2", 00:18:21.343 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:21.343 "is_configured": true, 00:18:21.343 "data_offset": 256, 00:18:21.343 "data_size": 7936 00:18:21.343 } 00:18:21.343 ] 00:18:21.343 }' 00:18:21.343 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:21.603 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=707 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.603 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.604 "name": "raid_bdev1", 00:18:21.604 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:21.604 "strip_size_kb": 0, 00:18:21.604 "state": "online", 00:18:21.604 "raid_level": "raid1", 00:18:21.604 "superblock": true, 00:18:21.604 "num_base_bdevs": 2, 00:18:21.604 "num_base_bdevs_discovered": 2, 00:18:21.604 "num_base_bdevs_operational": 2, 00:18:21.604 "process": { 00:18:21.604 "type": "rebuild", 00:18:21.604 "target": "spare", 00:18:21.604 "progress": { 00:18:21.604 "blocks": 2816, 00:18:21.604 "percent": 35 00:18:21.604 } 00:18:21.604 }, 00:18:21.604 "base_bdevs_list": [ 00:18:21.604 { 00:18:21.604 "name": "spare", 00:18:21.604 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:21.604 "is_configured": true, 00:18:21.604 "data_offset": 256, 00:18:21.604 "data_size": 7936 00:18:21.604 }, 00:18:21.604 { 00:18:21.604 "name": "BaseBdev2", 00:18:21.604 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:21.604 "is_configured": true, 00:18:21.604 "data_offset": 256, 00:18:21.604 "data_size": 7936 00:18:21.604 } 00:18:21.604 ] 00:18:21.604 }' 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.604 18:15:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.542 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.542 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.542 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.542 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.542 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.542 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.542 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.542 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.542 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.801 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.801 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.801 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.801 "name": "raid_bdev1", 00:18:22.801 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:22.801 "strip_size_kb": 0, 00:18:22.801 "state": "online", 00:18:22.801 "raid_level": "raid1", 00:18:22.801 "superblock": true, 00:18:22.801 "num_base_bdevs": 2, 00:18:22.801 "num_base_bdevs_discovered": 2, 00:18:22.801 "num_base_bdevs_operational": 2, 00:18:22.801 "process": { 00:18:22.801 "type": "rebuild", 00:18:22.801 "target": "spare", 00:18:22.801 "progress": { 00:18:22.801 "blocks": 5632, 00:18:22.801 "percent": 70 00:18:22.801 } 00:18:22.801 }, 00:18:22.801 "base_bdevs_list": [ 00:18:22.801 { 
00:18:22.801 "name": "spare", 00:18:22.801 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:22.801 "is_configured": true, 00:18:22.801 "data_offset": 256, 00:18:22.801 "data_size": 7936 00:18:22.801 }, 00:18:22.801 { 00:18:22.801 "name": "BaseBdev2", 00:18:22.801 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:22.801 "is_configured": true, 00:18:22.801 "data_offset": 256, 00:18:22.801 "data_size": 7936 00:18:22.801 } 00:18:22.801 ] 00:18:22.801 }' 00:18:22.801 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.801 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.801 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.801 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.801 18:15:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.737 [2024-12-06 18:15:35.545951] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:23.737 [2024-12-06 18:15:35.546107] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:23.737 [2024-12-06 18:15:35.546279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.737 "name": "raid_bdev1", 00:18:23.737 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:23.737 "strip_size_kb": 0, 00:18:23.737 "state": "online", 00:18:23.738 "raid_level": "raid1", 00:18:23.738 "superblock": true, 00:18:23.738 "num_base_bdevs": 2, 00:18:23.738 "num_base_bdevs_discovered": 2, 00:18:23.738 "num_base_bdevs_operational": 2, 00:18:23.738 "base_bdevs_list": [ 00:18:23.738 { 00:18:23.738 "name": "spare", 00:18:23.738 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:23.738 "is_configured": true, 00:18:23.738 "data_offset": 256, 00:18:23.738 "data_size": 7936 00:18:23.738 }, 00:18:23.738 { 00:18:23.738 "name": "BaseBdev2", 00:18:23.738 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:23.738 "is_configured": true, 00:18:23.738 "data_offset": 256, 00:18:23.738 "data_size": 7936 00:18:23.738 } 00:18:23.738 ] 00:18:23.738 }' 00:18:23.738 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.997 18:15:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.997 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.997 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.997 "name": "raid_bdev1", 00:18:23.997 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:23.997 "strip_size_kb": 0, 00:18:23.997 "state": "online", 00:18:23.997 "raid_level": "raid1", 00:18:23.997 "superblock": true, 00:18:23.997 "num_base_bdevs": 2, 00:18:23.997 "num_base_bdevs_discovered": 2, 00:18:23.997 "num_base_bdevs_operational": 2, 00:18:23.997 "base_bdevs_list": [ 00:18:23.997 { 00:18:23.997 "name": "spare", 00:18:23.997 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:23.997 "is_configured": true, 00:18:23.997 
"data_offset": 256, 00:18:23.997 "data_size": 7936 00:18:23.997 }, 00:18:23.997 { 00:18:23.997 "name": "BaseBdev2", 00:18:23.997 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:23.997 "is_configured": true, 00:18:23.997 "data_offset": 256, 00:18:23.998 "data_size": 7936 00:18:23.998 } 00:18:23.998 ] 00:18:23.998 }' 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.998 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.258 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.258 "name": "raid_bdev1", 00:18:24.258 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:24.258 "strip_size_kb": 0, 00:18:24.258 "state": "online", 00:18:24.258 "raid_level": "raid1", 00:18:24.258 "superblock": true, 00:18:24.258 "num_base_bdevs": 2, 00:18:24.258 "num_base_bdevs_discovered": 2, 00:18:24.258 "num_base_bdevs_operational": 2, 00:18:24.258 "base_bdevs_list": [ 00:18:24.258 { 00:18:24.258 "name": "spare", 00:18:24.258 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:24.258 "is_configured": true, 00:18:24.258 "data_offset": 256, 00:18:24.258 "data_size": 7936 00:18:24.258 }, 00:18:24.258 { 00:18:24.258 "name": "BaseBdev2", 00:18:24.258 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:24.258 "is_configured": true, 00:18:24.258 "data_offset": 256, 00:18:24.258 "data_size": 7936 00:18:24.258 } 00:18:24.258 ] 00:18:24.258 }' 00:18:24.258 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.258 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.519 
[2024-12-06 18:15:36.575299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.519 [2024-12-06 18:15:36.575332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.519 [2024-12-06 18:15:36.575422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.519 [2024-12-06 18:15:36.575493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.519 [2024-12-06 18:15:36.575505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.519 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:24.779 /dev/nbd0 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.779 1+0 records in 00:18:24.779 1+0 records out 00:18:24.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549714 s, 7.5 MB/s 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.779 18:15:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:25.039 /dev/nbd1 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:25.039 1+0 records in 00:18:25.039 1+0 records out 00:18:25.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273629 s, 15.0 MB/s 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:25.039 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:25.303 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:25.303 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.303 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.303 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.303 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:25.303 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.303 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.571 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:25.832 18:15:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.832 [2024-12-06 18:15:37.800217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:25.832 [2024-12-06 18:15:37.800274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.832 [2024-12-06 18:15:37.800299] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:25.832 [2024-12-06 18:15:37.800308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.832 [2024-12-06 18:15:37.802522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.832 
[2024-12-06 18:15:37.802550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:25.832 [2024-12-06 18:15:37.802651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:25.832 [2024-12-06 18:15:37.802709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.832 [2024-12-06 18:15:37.802862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.832 spare 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.832 [2024-12-06 18:15:37.902790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:25.832 [2024-12-06 18:15:37.902857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:25.832 [2024-12-06 18:15:37.903265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:25.832 [2024-12-06 18:15:37.903518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:25.832 [2024-12-06 18:15:37.903544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:25.832 [2024-12-06 18:15:37.903774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.832 18:15:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.832 "name": "raid_bdev1", 00:18:25.832 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:25.832 "strip_size_kb": 0, 00:18:25.832 "state": "online", 00:18:25.832 "raid_level": "raid1", 00:18:25.832 "superblock": true, 00:18:25.832 "num_base_bdevs": 2, 00:18:25.832 "num_base_bdevs_discovered": 2, 00:18:25.832 "num_base_bdevs_operational": 2, 
00:18:25.832 "base_bdevs_list": [ 00:18:25.832 { 00:18:25.832 "name": "spare", 00:18:25.832 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:25.832 "is_configured": true, 00:18:25.832 "data_offset": 256, 00:18:25.832 "data_size": 7936 00:18:25.832 }, 00:18:25.832 { 00:18:25.832 "name": "BaseBdev2", 00:18:25.832 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:25.832 "is_configured": true, 00:18:25.832 "data_offset": 256, 00:18:25.832 "data_size": 7936 00:18:25.832 } 00:18:25.832 ] 00:18:25.832 }' 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.832 18:15:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.401 "name": "raid_bdev1", 00:18:26.401 
"uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:26.401 "strip_size_kb": 0, 00:18:26.401 "state": "online", 00:18:26.401 "raid_level": "raid1", 00:18:26.401 "superblock": true, 00:18:26.401 "num_base_bdevs": 2, 00:18:26.401 "num_base_bdevs_discovered": 2, 00:18:26.401 "num_base_bdevs_operational": 2, 00:18:26.401 "base_bdevs_list": [ 00:18:26.401 { 00:18:26.401 "name": "spare", 00:18:26.401 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:26.401 "is_configured": true, 00:18:26.401 "data_offset": 256, 00:18:26.401 "data_size": 7936 00:18:26.401 }, 00:18:26.401 { 00:18:26.401 "name": "BaseBdev2", 00:18:26.401 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:26.401 "is_configured": true, 00:18:26.401 "data_offset": 256, 00:18:26.401 "data_size": 7936 00:18:26.401 } 00:18:26.401 ] 00:18:26.401 }' 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:26.401 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.402 [2024-12-06 18:15:38.551077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.402 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.402 
18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.677 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.677 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.677 "name": "raid_bdev1", 00:18:26.677 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:26.677 "strip_size_kb": 0, 00:18:26.677 "state": "online", 00:18:26.677 "raid_level": "raid1", 00:18:26.677 "superblock": true, 00:18:26.677 "num_base_bdevs": 2, 00:18:26.677 "num_base_bdevs_discovered": 1, 00:18:26.677 "num_base_bdevs_operational": 1, 00:18:26.677 "base_bdevs_list": [ 00:18:26.677 { 00:18:26.677 "name": null, 00:18:26.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.677 "is_configured": false, 00:18:26.677 "data_offset": 0, 00:18:26.677 "data_size": 7936 00:18:26.677 }, 00:18:26.677 { 00:18:26.677 "name": "BaseBdev2", 00:18:26.677 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:26.677 "is_configured": true, 00:18:26.677 "data_offset": 256, 00:18:26.677 "data_size": 7936 00:18:26.677 } 00:18:26.677 ] 00:18:26.677 }' 00:18:26.677 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.677 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.937 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:26.937 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.937 18:15:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.937 [2024-12-06 18:15:38.990330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.937 [2024-12-06 18:15:38.990601] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:18:26.937 [2024-12-06 18:15:38.990664] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:26.937 [2024-12-06 18:15:38.990757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.937 [2024-12-06 18:15:39.006893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:26.937 18:15:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.937 18:15:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:26.937 [2024-12-06 18:15:39.009062] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.878 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.139 
"name": "raid_bdev1", 00:18:28.139 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:28.139 "strip_size_kb": 0, 00:18:28.139 "state": "online", 00:18:28.139 "raid_level": "raid1", 00:18:28.139 "superblock": true, 00:18:28.139 "num_base_bdevs": 2, 00:18:28.139 "num_base_bdevs_discovered": 2, 00:18:28.139 "num_base_bdevs_operational": 2, 00:18:28.139 "process": { 00:18:28.139 "type": "rebuild", 00:18:28.139 "target": "spare", 00:18:28.139 "progress": { 00:18:28.139 "blocks": 2560, 00:18:28.139 "percent": 32 00:18:28.139 } 00:18:28.139 }, 00:18:28.139 "base_bdevs_list": [ 00:18:28.139 { 00:18:28.139 "name": "spare", 00:18:28.139 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:28.139 "is_configured": true, 00:18:28.139 "data_offset": 256, 00:18:28.139 "data_size": 7936 00:18:28.139 }, 00:18:28.139 { 00:18:28.139 "name": "BaseBdev2", 00:18:28.139 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:28.139 "is_configured": true, 00:18:28.139 "data_offset": 256, 00:18:28.139 "data_size": 7936 00:18:28.139 } 00:18:28.139 ] 00:18:28.139 }' 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.139 [2024-12-06 18:15:40.156928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.139 [2024-12-06 
18:15:40.215266] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:28.139 [2024-12-06 18:15:40.215411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.139 [2024-12-06 18:15:40.215445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.139 [2024-12-06 18:15:40.215456] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.139 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.399 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.399 "name": "raid_bdev1", 00:18:28.399 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:28.399 "strip_size_kb": 0, 00:18:28.399 "state": "online", 00:18:28.399 "raid_level": "raid1", 00:18:28.399 "superblock": true, 00:18:28.399 "num_base_bdevs": 2, 00:18:28.399 "num_base_bdevs_discovered": 1, 00:18:28.399 "num_base_bdevs_operational": 1, 00:18:28.399 "base_bdevs_list": [ 00:18:28.399 { 00:18:28.399 "name": null, 00:18:28.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.399 "is_configured": false, 00:18:28.399 "data_offset": 0, 00:18:28.399 "data_size": 7936 00:18:28.399 }, 00:18:28.399 { 00:18:28.399 "name": "BaseBdev2", 00:18:28.399 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:28.399 "is_configured": true, 00:18:28.399 "data_offset": 256, 00:18:28.399 "data_size": 7936 00:18:28.399 } 00:18:28.399 ] 00:18:28.399 }' 00:18:28.399 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.399 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.659 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:28.659 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.659 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.659 [2024-12-06 18:15:40.727887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:28.659 [2024-12-06 18:15:40.728019] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.659 [2024-12-06 18:15:40.728060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:28.659 [2024-12-06 18:15:40.728106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.659 [2024-12-06 18:15:40.728615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.659 [2024-12-06 18:15:40.728679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:28.659 [2024-12-06 18:15:40.728814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:28.659 [2024-12-06 18:15:40.728859] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.659 [2024-12-06 18:15:40.728907] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:28.659 [2024-12-06 18:15:40.728957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.659 [2024-12-06 18:15:40.745082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:28.659 spare 00:18:28.659 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.659 [2024-12-06 18:15:40.747039] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.659 18:15:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:29.597 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.597 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.597 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.597 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.597 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.597 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.597 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.597 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.597 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.856 "name": "raid_bdev1", 00:18:29.856 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:29.856 "strip_size_kb": 0, 00:18:29.856 
"state": "online", 00:18:29.856 "raid_level": "raid1", 00:18:29.856 "superblock": true, 00:18:29.856 "num_base_bdevs": 2, 00:18:29.856 "num_base_bdevs_discovered": 2, 00:18:29.856 "num_base_bdevs_operational": 2, 00:18:29.856 "process": { 00:18:29.856 "type": "rebuild", 00:18:29.856 "target": "spare", 00:18:29.856 "progress": { 00:18:29.856 "blocks": 2560, 00:18:29.856 "percent": 32 00:18:29.856 } 00:18:29.856 }, 00:18:29.856 "base_bdevs_list": [ 00:18:29.856 { 00:18:29.856 "name": "spare", 00:18:29.856 "uuid": "45e73bf4-2387-541d-9ac9-b65eb2b1450e", 00:18:29.856 "is_configured": true, 00:18:29.856 "data_offset": 256, 00:18:29.856 "data_size": 7936 00:18:29.856 }, 00:18:29.856 { 00:18:29.856 "name": "BaseBdev2", 00:18:29.856 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:29.856 "is_configured": true, 00:18:29.856 "data_offset": 256, 00:18:29.856 "data_size": 7936 00:18:29.856 } 00:18:29.856 ] 00:18:29.856 }' 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.856 [2024-12-06 18:15:41.911206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.856 [2024-12-06 18:15:41.952860] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:29.856 [2024-12-06 18:15:41.952925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.856 [2024-12-06 18:15:41.952944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.856 [2024-12-06 18:15:41.952950] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:29.856 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.857 18:15:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.857 18:15:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.857 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.115 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.115 "name": "raid_bdev1", 00:18:30.115 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:30.115 "strip_size_kb": 0, 00:18:30.115 "state": "online", 00:18:30.115 "raid_level": "raid1", 00:18:30.115 "superblock": true, 00:18:30.115 "num_base_bdevs": 2, 00:18:30.115 "num_base_bdevs_discovered": 1, 00:18:30.115 "num_base_bdevs_operational": 1, 00:18:30.115 "base_bdevs_list": [ 00:18:30.115 { 00:18:30.115 "name": null, 00:18:30.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.115 "is_configured": false, 00:18:30.115 "data_offset": 0, 00:18:30.115 "data_size": 7936 00:18:30.115 }, 00:18:30.115 { 00:18:30.115 "name": "BaseBdev2", 00:18:30.115 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:30.115 "is_configured": true, 00:18:30.115 "data_offset": 256, 00:18:30.115 "data_size": 7936 00:18:30.115 } 00:18:30.115 ] 00:18:30.115 }' 00:18:30.115 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.115 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.374 "name": "raid_bdev1", 00:18:30.374 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:30.374 "strip_size_kb": 0, 00:18:30.374 "state": "online", 00:18:30.374 "raid_level": "raid1", 00:18:30.374 "superblock": true, 00:18:30.374 "num_base_bdevs": 2, 00:18:30.374 "num_base_bdevs_discovered": 1, 00:18:30.374 "num_base_bdevs_operational": 1, 00:18:30.374 "base_bdevs_list": [ 00:18:30.374 { 00:18:30.374 "name": null, 00:18:30.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.374 "is_configured": false, 00:18:30.374 "data_offset": 0, 00:18:30.374 "data_size": 7936 00:18:30.374 }, 00:18:30.374 { 00:18:30.374 "name": "BaseBdev2", 00:18:30.374 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:30.374 "is_configured": true, 00:18:30.374 "data_offset": 256, 00:18:30.374 "data_size": 7936 00:18:30.374 } 00:18:30.374 ] 00:18:30.374 }' 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.374 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.634 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.634 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:30.634 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.634 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.634 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.634 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:30.634 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.634 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.635 [2024-12-06 18:15:42.563153] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:30.635 [2024-12-06 18:15:42.563218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.635 [2024-12-06 18:15:42.563250] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:30.635 [2024-12-06 18:15:42.563272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.635 [2024-12-06 18:15:42.563812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.635 [2024-12-06 18:15:42.563841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.635 [2024-12-06 18:15:42.563935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:30.635 [2024-12-06 18:15:42.563954] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.635 [2024-12-06 18:15:42.563967] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:30.635 [2024-12-06 18:15:42.563978] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:30.635 BaseBdev1 00:18:30.635 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.635 18:15:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.573 "name": "raid_bdev1", 00:18:31.573 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:31.573 "strip_size_kb": 0, 00:18:31.573 "state": "online", 00:18:31.573 "raid_level": "raid1", 00:18:31.573 "superblock": true, 00:18:31.573 "num_base_bdevs": 2, 00:18:31.573 "num_base_bdevs_discovered": 1, 00:18:31.573 "num_base_bdevs_operational": 1, 00:18:31.573 "base_bdevs_list": [ 00:18:31.573 { 00:18:31.573 "name": null, 00:18:31.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.573 "is_configured": false, 00:18:31.573 "data_offset": 0, 00:18:31.573 "data_size": 7936 00:18:31.573 }, 00:18:31.573 { 00:18:31.573 "name": "BaseBdev2", 00:18:31.573 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:31.573 "is_configured": true, 00:18:31.573 "data_offset": 256, 00:18:31.573 "data_size": 7936 00:18:31.573 } 00:18:31.573 ] 00:18:31.573 }' 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.573 18:15:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.142 "name": "raid_bdev1", 00:18:32.142 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:32.142 "strip_size_kb": 0, 00:18:32.142 "state": "online", 00:18:32.142 "raid_level": "raid1", 00:18:32.142 "superblock": true, 00:18:32.142 "num_base_bdevs": 2, 00:18:32.142 "num_base_bdevs_discovered": 1, 00:18:32.142 "num_base_bdevs_operational": 1, 00:18:32.142 "base_bdevs_list": [ 00:18:32.142 { 00:18:32.142 "name": null, 00:18:32.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.142 "is_configured": false, 00:18:32.142 "data_offset": 0, 00:18:32.142 "data_size": 7936 00:18:32.142 }, 00:18:32.142 { 00:18:32.142 "name": "BaseBdev2", 00:18:32.142 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:32.142 "is_configured": true, 00:18:32.142 "data_offset": 256, 00:18:32.142 "data_size": 7936 00:18:32.142 } 00:18:32.142 ] 00:18:32.142 }' 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.142 [2024-12-06 18:15:44.196362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.142 [2024-12-06 18:15:44.196617] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:32.142 [2024-12-06 18:15:44.196677] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:32.142 request: 00:18:32.142 { 00:18:32.142 "base_bdev": "BaseBdev1", 00:18:32.142 "raid_bdev": "raid_bdev1", 00:18:32.142 "method": "bdev_raid_add_base_bdev", 00:18:32.142 "req_id": 1 00:18:32.142 } 00:18:32.142 Got JSON-RPC error response 00:18:32.142 response: 00:18:32.142 { 00:18:32.142 "code": -22, 00:18:32.142 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:32.142 } 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:32.142 18:15:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.080 18:15:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.080 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.338 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.338 "name": "raid_bdev1", 00:18:33.338 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:33.338 "strip_size_kb": 0, 00:18:33.338 "state": "online", 00:18:33.338 "raid_level": "raid1", 00:18:33.338 "superblock": true, 00:18:33.338 "num_base_bdevs": 2, 00:18:33.338 "num_base_bdevs_discovered": 1, 00:18:33.338 "num_base_bdevs_operational": 1, 00:18:33.338 "base_bdevs_list": [ 00:18:33.338 { 00:18:33.338 "name": null, 00:18:33.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.338 "is_configured": false, 00:18:33.338 "data_offset": 0, 00:18:33.338 "data_size": 7936 00:18:33.338 }, 00:18:33.338 { 00:18:33.338 "name": "BaseBdev2", 00:18:33.338 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:33.338 "is_configured": true, 00:18:33.338 "data_offset": 256, 00:18:33.338 "data_size": 7936 00:18:33.338 } 00:18:33.338 ] 00:18:33.338 }' 00:18:33.338 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.338 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.620 18:15:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.620 "name": "raid_bdev1", 00:18:33.620 "uuid": "558e5c1a-b564-4a26-9db9-9838206598f2", 00:18:33.620 "strip_size_kb": 0, 00:18:33.620 "state": "online", 00:18:33.620 "raid_level": "raid1", 00:18:33.620 "superblock": true, 00:18:33.620 "num_base_bdevs": 2, 00:18:33.620 "num_base_bdevs_discovered": 1, 00:18:33.620 "num_base_bdevs_operational": 1, 00:18:33.620 "base_bdevs_list": [ 00:18:33.620 { 00:18:33.620 "name": null, 00:18:33.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.620 "is_configured": false, 00:18:33.620 "data_offset": 0, 00:18:33.620 "data_size": 7936 00:18:33.620 }, 00:18:33.620 { 00:18:33.620 "name": "BaseBdev2", 00:18:33.620 "uuid": "f2180e73-818a-5555-bc01-81200f86f4cc", 00:18:33.620 "is_configured": true, 00:18:33.620 "data_offset": 256, 00:18:33.620 "data_size": 7936 00:18:33.620 } 00:18:33.620 ] 00:18:33.620 }' 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.620 18:15:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87091 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87091 ']' 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87091 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.620 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87091 00:18:33.880 killing process with pid 87091 00:18:33.880 Received shutdown signal, test time was about 60.000000 seconds 00:18:33.880 00:18:33.880 Latency(us) 00:18:33.880 [2024-12-06T18:15:46.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.880 [2024-12-06T18:15:46.048Z] =================================================================================================================== 00:18:33.880 [2024-12-06T18:15:46.048Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.880 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.880 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.880 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87091' 00:18:33.880 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87091 00:18:33.880 [2024-12-06 18:15:45.789447] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.880 [2024-12-06 18:15:45.789609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.880 [2024-12-06 18:15:45.789665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:18:33.880 [2024-12-06 18:15:45.789678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:33.881 18:15:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87091 00:18:34.140 [2024-12-06 18:15:46.096912] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:35.076 18:15:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:35.076 00:18:35.076 real 0m20.044s 00:18:35.076 user 0m26.187s 00:18:35.076 sys 0m2.663s 00:18:35.076 18:15:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.076 18:15:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.076 ************************************ 00:18:35.076 END TEST raid_rebuild_test_sb_4k 00:18:35.076 ************************************ 00:18:35.336 18:15:47 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:35.336 18:15:47 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:35.336 18:15:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:35.336 18:15:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.336 18:15:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:35.336 ************************************ 00:18:35.336 START TEST raid_state_function_test_sb_md_separate 00:18:35.336 ************************************ 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:35.336 18:15:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:35.336 18:15:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87785 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87785' 00:18:35.336 Process raid pid: 87785 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87785 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87785 ']' 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.336 18:15:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.336 [2024-12-06 18:15:47.385401] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:18:35.336 [2024-12-06 18:15:47.385515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.596 [2024-12-06 18:15:47.555057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.596 [2024-12-06 18:15:47.675412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.856 [2024-12-06 18:15:47.880399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.856 [2024-12-06 18:15:47.880444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.115 [2024-12-06 18:15:48.252083] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:36.115 [2024-12-06 18:15:48.252140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:36.115 [2024-12-06 18:15:48.252151] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:36.115 [2024-12-06 18:15:48.252161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.115 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.375 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.375 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.375 "name": "Existed_Raid", 00:18:36.375 "uuid": "3f8740c2-2313-4d0c-a38a-1b5933c94724", 00:18:36.375 "strip_size_kb": 0, 00:18:36.375 "state": "configuring", 00:18:36.375 "raid_level": "raid1", 00:18:36.375 "superblock": true, 00:18:36.375 "num_base_bdevs": 2, 00:18:36.375 "num_base_bdevs_discovered": 0, 00:18:36.375 "num_base_bdevs_operational": 2, 00:18:36.375 "base_bdevs_list": [ 00:18:36.375 { 00:18:36.375 "name": "BaseBdev1", 00:18:36.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.375 "is_configured": false, 00:18:36.375 "data_offset": 0, 00:18:36.375 "data_size": 0 00:18:36.375 }, 00:18:36.375 { 00:18:36.375 "name": "BaseBdev2", 00:18:36.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.375 "is_configured": false, 00:18:36.375 "data_offset": 0, 00:18:36.375 "data_size": 0 00:18:36.375 } 00:18:36.375 ] 00:18:36.375 }' 00:18:36.375 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.375 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.635 [2024-12-06 
18:15:48.623470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:36.635 [2024-12-06 18:15:48.623511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.635 [2024-12-06 18:15:48.635445] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:36.635 [2024-12-06 18:15:48.635500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:36.635 [2024-12-06 18:15:48.635511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:36.635 [2024-12-06 18:15:48.635522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.635 [2024-12-06 18:15:48.683455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.635 BaseBdev1 
00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.635 [ 00:18:36.635 { 00:18:36.635 "name": "BaseBdev1", 00:18:36.635 "aliases": [ 00:18:36.635 "94112010-0787-469c-81ba-66b8e829c053" 00:18:36.635 ], 00:18:36.635 "product_name": "Malloc disk", 00:18:36.635 
"block_size": 4096, 00:18:36.635 "num_blocks": 8192, 00:18:36.635 "uuid": "94112010-0787-469c-81ba-66b8e829c053", 00:18:36.635 "md_size": 32, 00:18:36.635 "md_interleave": false, 00:18:36.635 "dif_type": 0, 00:18:36.635 "assigned_rate_limits": { 00:18:36.635 "rw_ios_per_sec": 0, 00:18:36.635 "rw_mbytes_per_sec": 0, 00:18:36.635 "r_mbytes_per_sec": 0, 00:18:36.635 "w_mbytes_per_sec": 0 00:18:36.635 }, 00:18:36.635 "claimed": true, 00:18:36.635 "claim_type": "exclusive_write", 00:18:36.635 "zoned": false, 00:18:36.635 "supported_io_types": { 00:18:36.635 "read": true, 00:18:36.635 "write": true, 00:18:36.635 "unmap": true, 00:18:36.635 "flush": true, 00:18:36.635 "reset": true, 00:18:36.635 "nvme_admin": false, 00:18:36.635 "nvme_io": false, 00:18:36.635 "nvme_io_md": false, 00:18:36.635 "write_zeroes": true, 00:18:36.635 "zcopy": true, 00:18:36.635 "get_zone_info": false, 00:18:36.635 "zone_management": false, 00:18:36.635 "zone_append": false, 00:18:36.635 "compare": false, 00:18:36.635 "compare_and_write": false, 00:18:36.635 "abort": true, 00:18:36.635 "seek_hole": false, 00:18:36.635 "seek_data": false, 00:18:36.635 "copy": true, 00:18:36.635 "nvme_iov_md": false 00:18:36.635 }, 00:18:36.635 "memory_domains": [ 00:18:36.635 { 00:18:36.635 "dma_device_id": "system", 00:18:36.635 "dma_device_type": 1 00:18:36.635 }, 00:18:36.635 { 00:18:36.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.635 "dma_device_type": 2 00:18:36.635 } 00:18:36.635 ], 00:18:36.635 "driver_specific": {} 00:18:36.635 } 00:18:36.635 ] 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:36.635 18:15:48 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.635 "name": "Existed_Raid", 00:18:36.635 "uuid": "84689594-63de-45cd-a7d0-d341b31b7c31", 
00:18:36.635 "strip_size_kb": 0, 00:18:36.635 "state": "configuring", 00:18:36.635 "raid_level": "raid1", 00:18:36.635 "superblock": true, 00:18:36.635 "num_base_bdevs": 2, 00:18:36.635 "num_base_bdevs_discovered": 1, 00:18:36.635 "num_base_bdevs_operational": 2, 00:18:36.635 "base_bdevs_list": [ 00:18:36.635 { 00:18:36.635 "name": "BaseBdev1", 00:18:36.635 "uuid": "94112010-0787-469c-81ba-66b8e829c053", 00:18:36.635 "is_configured": true, 00:18:36.635 "data_offset": 256, 00:18:36.635 "data_size": 7936 00:18:36.635 }, 00:18:36.635 { 00:18:36.635 "name": "BaseBdev2", 00:18:36.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.635 "is_configured": false, 00:18:36.635 "data_offset": 0, 00:18:36.635 "data_size": 0 00:18:36.635 } 00:18:36.635 ] 00:18:36.635 }' 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.635 18:15:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.206 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:37.206 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.206 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.206 [2024-12-06 18:15:49.226637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:37.206 [2024-12-06 18:15:49.226700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:37.206 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.206 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:37.206 18:15:49 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.207 [2024-12-06 18:15:49.238664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:37.207 [2024-12-06 18:15:49.240731] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:37.207 [2024-12-06 18:15:49.240953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.207 "name": "Existed_Raid", 00:18:37.207 "uuid": "87fedb75-a766-473e-a110-e647ecbb34d2", 00:18:37.207 "strip_size_kb": 0, 00:18:37.207 "state": "configuring", 00:18:37.207 "raid_level": "raid1", 00:18:37.207 "superblock": true, 00:18:37.207 "num_base_bdevs": 2, 00:18:37.207 "num_base_bdevs_discovered": 1, 00:18:37.207 "num_base_bdevs_operational": 2, 00:18:37.207 "base_bdevs_list": [ 00:18:37.207 { 00:18:37.207 "name": "BaseBdev1", 00:18:37.207 "uuid": "94112010-0787-469c-81ba-66b8e829c053", 00:18:37.207 "is_configured": true, 00:18:37.207 "data_offset": 256, 00:18:37.207 "data_size": 7936 00:18:37.207 }, 00:18:37.207 { 00:18:37.207 "name": "BaseBdev2", 00:18:37.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.207 "is_configured": false, 00:18:37.207 "data_offset": 0, 00:18:37.207 "data_size": 0 00:18:37.207 } 00:18:37.207 ] 00:18:37.207 }' 00:18:37.207 18:15:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.207 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.776 [2024-12-06 18:15:49.742843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.776 [2024-12-06 18:15:49.743099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:37.776 [2024-12-06 18:15:49.743117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:37.776 [2024-12-06 18:15:49.743205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:37.776 [2024-12-06 18:15:49.743377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:37.776 [2024-12-06 18:15:49.743391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:37.776 [2024-12-06 18:15:49.743492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.776 BaseBdev2 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.776 [ 00:18:37.776 { 00:18:37.776 "name": "BaseBdev2", 00:18:37.776 "aliases": [ 00:18:37.776 "484300b0-4801-4af8-92df-e9632ebfeb38" 00:18:37.776 ], 00:18:37.776 "product_name": "Malloc disk", 00:18:37.776 "block_size": 4096, 00:18:37.776 "num_blocks": 8192, 00:18:37.776 "uuid": "484300b0-4801-4af8-92df-e9632ebfeb38", 00:18:37.776 "md_size": 32, 00:18:37.776 "md_interleave": false, 00:18:37.776 "dif_type": 0, 00:18:37.776 "assigned_rate_limits": { 00:18:37.776 "rw_ios_per_sec": 0, 00:18:37.776 "rw_mbytes_per_sec": 0, 00:18:37.776 "r_mbytes_per_sec": 0, 00:18:37.776 "w_mbytes_per_sec": 0 00:18:37.776 }, 00:18:37.776 "claimed": true, 00:18:37.776 "claim_type": 
"exclusive_write", 00:18:37.776 "zoned": false, 00:18:37.776 "supported_io_types": { 00:18:37.776 "read": true, 00:18:37.776 "write": true, 00:18:37.776 "unmap": true, 00:18:37.776 "flush": true, 00:18:37.776 "reset": true, 00:18:37.776 "nvme_admin": false, 00:18:37.776 "nvme_io": false, 00:18:37.776 "nvme_io_md": false, 00:18:37.776 "write_zeroes": true, 00:18:37.776 "zcopy": true, 00:18:37.776 "get_zone_info": false, 00:18:37.776 "zone_management": false, 00:18:37.776 "zone_append": false, 00:18:37.776 "compare": false, 00:18:37.776 "compare_and_write": false, 00:18:37.776 "abort": true, 00:18:37.776 "seek_hole": false, 00:18:37.776 "seek_data": false, 00:18:37.776 "copy": true, 00:18:37.776 "nvme_iov_md": false 00:18:37.776 }, 00:18:37.776 "memory_domains": [ 00:18:37.776 { 00:18:37.776 "dma_device_id": "system", 00:18:37.776 "dma_device_type": 1 00:18:37.776 }, 00:18:37.776 { 00:18:37.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.776 "dma_device_type": 2 00:18:37.776 } 00:18:37.776 ], 00:18:37.776 "driver_specific": {} 00:18:37.776 } 00:18:37.776 ] 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.776 
18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.776 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.776 "name": "Existed_Raid", 00:18:37.776 "uuid": "87fedb75-a766-473e-a110-e647ecbb34d2", 00:18:37.776 "strip_size_kb": 0, 00:18:37.776 "state": "online", 00:18:37.776 "raid_level": "raid1", 00:18:37.776 "superblock": true, 00:18:37.776 "num_base_bdevs": 2, 00:18:37.776 "num_base_bdevs_discovered": 2, 00:18:37.776 "num_base_bdevs_operational": 2, 00:18:37.776 
"base_bdevs_list": [ 00:18:37.776 { 00:18:37.776 "name": "BaseBdev1", 00:18:37.776 "uuid": "94112010-0787-469c-81ba-66b8e829c053", 00:18:37.776 "is_configured": true, 00:18:37.776 "data_offset": 256, 00:18:37.776 "data_size": 7936 00:18:37.776 }, 00:18:37.776 { 00:18:37.776 "name": "BaseBdev2", 00:18:37.776 "uuid": "484300b0-4801-4af8-92df-e9632ebfeb38", 00:18:37.776 "is_configured": true, 00:18:37.776 "data_offset": 256, 00:18:37.777 "data_size": 7936 00:18:37.777 } 00:18:37.777 ] 00:18:37.777 }' 00:18:37.777 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.777 18:15:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:18:38.341 [2024-12-06 18:15:50.262336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.341 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:38.341 "name": "Existed_Raid", 00:18:38.341 "aliases": [ 00:18:38.341 "87fedb75-a766-473e-a110-e647ecbb34d2" 00:18:38.341 ], 00:18:38.341 "product_name": "Raid Volume", 00:18:38.341 "block_size": 4096, 00:18:38.341 "num_blocks": 7936, 00:18:38.341 "uuid": "87fedb75-a766-473e-a110-e647ecbb34d2", 00:18:38.341 "md_size": 32, 00:18:38.341 "md_interleave": false, 00:18:38.341 "dif_type": 0, 00:18:38.341 "assigned_rate_limits": { 00:18:38.341 "rw_ios_per_sec": 0, 00:18:38.341 "rw_mbytes_per_sec": 0, 00:18:38.341 "r_mbytes_per_sec": 0, 00:18:38.341 "w_mbytes_per_sec": 0 00:18:38.341 }, 00:18:38.341 "claimed": false, 00:18:38.341 "zoned": false, 00:18:38.341 "supported_io_types": { 00:18:38.341 "read": true, 00:18:38.341 "write": true, 00:18:38.341 "unmap": false, 00:18:38.341 "flush": false, 00:18:38.341 "reset": true, 00:18:38.341 "nvme_admin": false, 00:18:38.341 "nvme_io": false, 00:18:38.341 "nvme_io_md": false, 00:18:38.341 "write_zeroes": true, 00:18:38.341 "zcopy": false, 00:18:38.341 "get_zone_info": false, 00:18:38.341 "zone_management": false, 00:18:38.341 "zone_append": false, 00:18:38.341 "compare": false, 00:18:38.341 "compare_and_write": false, 00:18:38.341 "abort": false, 00:18:38.341 "seek_hole": false, 00:18:38.341 "seek_data": false, 00:18:38.341 "copy": false, 00:18:38.341 "nvme_iov_md": false 00:18:38.341 }, 00:18:38.341 "memory_domains": [ 00:18:38.341 { 00:18:38.341 "dma_device_id": "system", 00:18:38.341 "dma_device_type": 1 00:18:38.341 }, 00:18:38.341 { 00:18:38.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.341 "dma_device_type": 2 00:18:38.341 }, 00:18:38.341 { 
00:18:38.341 "dma_device_id": "system", 00:18:38.341 "dma_device_type": 1 00:18:38.341 }, 00:18:38.341 { 00:18:38.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.341 "dma_device_type": 2 00:18:38.341 } 00:18:38.341 ], 00:18:38.341 "driver_specific": { 00:18:38.341 "raid": { 00:18:38.341 "uuid": "87fedb75-a766-473e-a110-e647ecbb34d2", 00:18:38.341 "strip_size_kb": 0, 00:18:38.341 "state": "online", 00:18:38.341 "raid_level": "raid1", 00:18:38.341 "superblock": true, 00:18:38.341 "num_base_bdevs": 2, 00:18:38.341 "num_base_bdevs_discovered": 2, 00:18:38.341 "num_base_bdevs_operational": 2, 00:18:38.341 "base_bdevs_list": [ 00:18:38.341 { 00:18:38.341 "name": "BaseBdev1", 00:18:38.342 "uuid": "94112010-0787-469c-81ba-66b8e829c053", 00:18:38.342 "is_configured": true, 00:18:38.342 "data_offset": 256, 00:18:38.342 "data_size": 7936 00:18:38.342 }, 00:18:38.342 { 00:18:38.342 "name": "BaseBdev2", 00:18:38.342 "uuid": "484300b0-4801-4af8-92df-e9632ebfeb38", 00:18:38.342 "is_configured": true, 00:18:38.342 "data_offset": 256, 00:18:38.342 "data_size": 7936 00:18:38.342 } 00:18:38.342 ] 00:18:38.342 } 00:18:38.342 } 00:18:38.342 }' 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:38.342 BaseBdev2' 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.342 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.342 [2024-12-06 18:15:50.485703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.600 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.600 "name": "Existed_Raid", 00:18:38.600 "uuid": "87fedb75-a766-473e-a110-e647ecbb34d2", 00:18:38.600 "strip_size_kb": 0, 00:18:38.600 "state": "online", 00:18:38.600 "raid_level": "raid1", 00:18:38.600 "superblock": true, 00:18:38.600 "num_base_bdevs": 2, 00:18:38.600 "num_base_bdevs_discovered": 1, 00:18:38.601 "num_base_bdevs_operational": 1, 00:18:38.601 "base_bdevs_list": [ 00:18:38.601 { 00:18:38.601 "name": null, 00:18:38.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.601 "is_configured": false, 00:18:38.601 "data_offset": 0, 00:18:38.601 "data_size": 7936 00:18:38.601 }, 00:18:38.601 { 00:18:38.601 "name": "BaseBdev2", 00:18:38.601 "uuid": 
"484300b0-4801-4af8-92df-e9632ebfeb38", 00:18:38.601 "is_configured": true, 00:18:38.601 "data_offset": 256, 00:18:38.601 "data_size": 7936 00:18:38.601 } 00:18:38.601 ] 00:18:38.601 }' 00:18:38.601 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.601 18:15:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.858 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:38.858 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:38.858 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.858 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.858 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.859 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:38.859 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.118 [2024-12-06 18:15:51.062453] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:39.118 [2024-12-06 18:15:51.062566] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.118 [2024-12-06 18:15:51.165782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.118 [2024-12-06 18:15:51.165832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.118 [2024-12-06 18:15:51.165845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:39.118 18:15:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87785 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87785 ']' 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87785 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87785 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.118 killing process with pid 87785 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87785' 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87785 00:18:39.118 [2024-12-06 18:15:51.258790] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:39.118 18:15:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87785 00:18:39.118 [2024-12-06 18:15:51.275497] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.497 18:15:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:40.497 00:18:40.497 real 0m5.129s 00:18:40.497 user 0m7.371s 00:18:40.497 sys 0m0.880s 00:18:40.497 18:15:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.497 
18:15:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.497 ************************************ 00:18:40.497 END TEST raid_state_function_test_sb_md_separate 00:18:40.497 ************************************ 00:18:40.497 18:15:52 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:40.497 18:15:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:40.497 18:15:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.497 18:15:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:40.497 ************************************ 00:18:40.497 START TEST raid_superblock_test_md_separate 00:18:40.497 ************************************ 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88035 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88035 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88035 ']' 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.497 18:15:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.497 [2024-12-06 18:15:52.580419] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:18:40.497 [2024-12-06 18:15:52.581096] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88035 ] 00:18:40.757 [2024-12-06 18:15:52.738776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.757 [2024-12-06 18:15:52.854163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.016 [2024-12-06 18:15:53.052629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.016 [2024-12-06 18:15:53.052761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:41.276 18:15:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.276 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.536 malloc1 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.536 [2024-12-06 18:15:53.461898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:41.536 [2024-12-06 18:15:53.461953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.536 [2024-12-06 18:15:53.461991] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:41.536 [2024-12-06 18:15:53.462000] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.536 [2024-12-06 18:15:53.463844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.536 [2024-12-06 18:15:53.463881] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:41.536 pt1 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.536 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.537 malloc2 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.537 18:15:53 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.537 [2024-12-06 18:15:53.516651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.537 [2024-12-06 18:15:53.516761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.537 [2024-12-06 18:15:53.516799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:41.537 [2024-12-06 18:15:53.516842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.537 [2024-12-06 18:15:53.518677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.537 [2024-12-06 18:15:53.518762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.537 pt2 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.537 [2024-12-06 18:15:53.528656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:41.537 [2024-12-06 18:15:53.530443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.537 [2024-12-06 18:15:53.530688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:41.537 [2024-12-06 18:15:53.530738] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:41.537 [2024-12-06 18:15:53.530827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:41.537 [2024-12-06 18:15:53.530979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:41.537 [2024-12-06 18:15:53.531020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:41.537 [2024-12-06 18:15:53.531166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.537 18:15:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.537 "name": "raid_bdev1", 00:18:41.537 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:41.537 "strip_size_kb": 0, 00:18:41.537 "state": "online", 00:18:41.537 "raid_level": "raid1", 00:18:41.537 "superblock": true, 00:18:41.537 "num_base_bdevs": 2, 00:18:41.537 "num_base_bdevs_discovered": 2, 00:18:41.537 "num_base_bdevs_operational": 2, 00:18:41.537 "base_bdevs_list": [ 00:18:41.537 { 00:18:41.537 "name": "pt1", 00:18:41.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.537 "is_configured": true, 00:18:41.537 "data_offset": 256, 00:18:41.537 "data_size": 7936 00:18:41.537 }, 00:18:41.537 { 00:18:41.537 "name": "pt2", 00:18:41.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.537 "is_configured": true, 00:18:41.537 "data_offset": 256, 00:18:41.537 "data_size": 7936 00:18:41.537 } 00:18:41.537 ] 00:18:41.537 }' 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.537 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:42.106 [2024-12-06 18:15:53.980257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.106 18:15:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.106 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:42.106 "name": "raid_bdev1", 00:18:42.106 "aliases": [ 00:18:42.106 "d389b024-c10d-4039-a7a3-771132e8afb4" 00:18:42.106 ], 00:18:42.106 "product_name": "Raid Volume", 00:18:42.106 "block_size": 4096, 00:18:42.106 "num_blocks": 7936, 00:18:42.106 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:42.106 "md_size": 32, 00:18:42.106 "md_interleave": false, 00:18:42.106 "dif_type": 0, 00:18:42.106 "assigned_rate_limits": { 00:18:42.106 "rw_ios_per_sec": 0, 00:18:42.106 "rw_mbytes_per_sec": 0, 00:18:42.106 "r_mbytes_per_sec": 0, 00:18:42.106 "w_mbytes_per_sec": 0 00:18:42.106 }, 00:18:42.106 "claimed": false, 00:18:42.106 "zoned": false, 
00:18:42.107 "supported_io_types": { 00:18:42.107 "read": true, 00:18:42.107 "write": true, 00:18:42.107 "unmap": false, 00:18:42.107 "flush": false, 00:18:42.107 "reset": true, 00:18:42.107 "nvme_admin": false, 00:18:42.107 "nvme_io": false, 00:18:42.107 "nvme_io_md": false, 00:18:42.107 "write_zeroes": true, 00:18:42.107 "zcopy": false, 00:18:42.107 "get_zone_info": false, 00:18:42.107 "zone_management": false, 00:18:42.107 "zone_append": false, 00:18:42.107 "compare": false, 00:18:42.107 "compare_and_write": false, 00:18:42.107 "abort": false, 00:18:42.107 "seek_hole": false, 00:18:42.107 "seek_data": false, 00:18:42.107 "copy": false, 00:18:42.107 "nvme_iov_md": false 00:18:42.107 }, 00:18:42.107 "memory_domains": [ 00:18:42.107 { 00:18:42.107 "dma_device_id": "system", 00:18:42.107 "dma_device_type": 1 00:18:42.107 }, 00:18:42.107 { 00:18:42.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.107 "dma_device_type": 2 00:18:42.107 }, 00:18:42.107 { 00:18:42.107 "dma_device_id": "system", 00:18:42.107 "dma_device_type": 1 00:18:42.107 }, 00:18:42.107 { 00:18:42.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.107 "dma_device_type": 2 00:18:42.107 } 00:18:42.107 ], 00:18:42.107 "driver_specific": { 00:18:42.107 "raid": { 00:18:42.107 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:42.107 "strip_size_kb": 0, 00:18:42.107 "state": "online", 00:18:42.107 "raid_level": "raid1", 00:18:42.107 "superblock": true, 00:18:42.107 "num_base_bdevs": 2, 00:18:42.107 "num_base_bdevs_discovered": 2, 00:18:42.107 "num_base_bdevs_operational": 2, 00:18:42.107 "base_bdevs_list": [ 00:18:42.107 { 00:18:42.107 "name": "pt1", 00:18:42.107 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.107 "is_configured": true, 00:18:42.107 "data_offset": 256, 00:18:42.107 "data_size": 7936 00:18:42.107 }, 00:18:42.107 { 00:18:42.107 "name": "pt2", 00:18:42.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.107 "is_configured": true, 00:18:42.107 "data_offset": 256, 
00:18:42.107 "data_size": 7936 00:18:42.107 } 00:18:42.107 ] 00:18:42.107 } 00:18:42.107 } 00:18:42.107 }' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:42.107 pt2' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.107 [2024-12-06 18:15:54.179850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d389b024-c10d-4039-a7a3-771132e8afb4 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z d389b024-c10d-4039-a7a3-771132e8afb4 ']' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.107 [2024-12-06 18:15:54.223459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:42.107 [2024-12-06 18:15:54.223522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:42.107 [2024-12-06 18:15:54.223631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.107 [2024-12-06 18:15:54.223691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:42.107 [2024-12-06 18:15:54.223702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.107 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.367 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.367 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:42.368 18:15:54 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.368 [2024-12-06 18:15:54.343272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:42.368 [2024-12-06 18:15:54.345156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:42.368 [2024-12-06 18:15:54.345275] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:42.368 [2024-12-06 18:15:54.345362] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:42.368 [2024-12-06 18:15:54.345413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:42.368 [2024-12-06 18:15:54.345436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:42.368 request: 00:18:42.368 { 00:18:42.368 "name": 
"raid_bdev1", 00:18:42.368 "raid_level": "raid1", 00:18:42.368 "base_bdevs": [ 00:18:42.368 "malloc1", 00:18:42.368 "malloc2" 00:18:42.368 ], 00:18:42.368 "superblock": false, 00:18:42.368 "method": "bdev_raid_create", 00:18:42.368 "req_id": 1 00:18:42.368 } 00:18:42.368 Got JSON-RPC error response 00:18:42.368 response: 00:18:42.368 { 00:18:42.368 "code": -17, 00:18:42.368 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:42.368 } 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.368 [2024-12-06 18:15:54.407160] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:42.368 [2024-12-06 18:15:54.407207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.368 [2024-12-06 18:15:54.407222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:42.368 [2024-12-06 18:15:54.407232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.368 [2024-12-06 18:15:54.409162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.368 [2024-12-06 18:15:54.409199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:42.368 [2024-12-06 18:15:54.409244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:42.368 [2024-12-06 18:15:54.409304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:42.368 pt1 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.368 "name": "raid_bdev1", 00:18:42.368 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:42.368 "strip_size_kb": 0, 00:18:42.368 "state": "configuring", 00:18:42.368 "raid_level": "raid1", 00:18:42.368 "superblock": true, 00:18:42.368 "num_base_bdevs": 2, 00:18:42.368 "num_base_bdevs_discovered": 1, 00:18:42.368 "num_base_bdevs_operational": 2, 00:18:42.368 "base_bdevs_list": [ 00:18:42.368 { 00:18:42.368 "name": "pt1", 00:18:42.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.368 "is_configured": true, 00:18:42.368 "data_offset": 256, 00:18:42.368 "data_size": 7936 00:18:42.368 }, 00:18:42.368 { 00:18:42.368 "name": null, 00:18:42.368 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.368 "is_configured": false, 00:18:42.368 "data_offset": 256, 00:18:42.368 "data_size": 7936 00:18:42.368 } 00:18:42.368 ] 00:18:42.368 }' 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.368 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.938 [2024-12-06 18:15:54.858405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.938 [2024-12-06 18:15:54.858562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.938 [2024-12-06 18:15:54.858604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:42.938 [2024-12-06 18:15:54.858635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.938 [2024-12-06 18:15:54.858896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.938 [2024-12-06 18:15:54.858952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.938 [2024-12-06 18:15:54.859044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:42.938 [2024-12-06 18:15:54.859107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.938 [2024-12-06 18:15:54.859255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:42.938 [2024-12-06 18:15:54.859293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:42.938 [2024-12-06 18:15:54.859395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:42.938 [2024-12-06 18:15:54.859543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:42.938 [2024-12-06 18:15:54.859610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:42.938 [2024-12-06 18:15:54.859744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.938 pt2 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.938 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.938 "name": "raid_bdev1", 00:18:42.938 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:42.938 "strip_size_kb": 0, 00:18:42.938 "state": "online", 00:18:42.938 "raid_level": "raid1", 00:18:42.938 "superblock": true, 00:18:42.938 "num_base_bdevs": 2, 00:18:42.938 "num_base_bdevs_discovered": 2, 00:18:42.938 "num_base_bdevs_operational": 2, 00:18:42.938 "base_bdevs_list": [ 00:18:42.938 { 00:18:42.938 "name": "pt1", 00:18:42.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.938 "is_configured": true, 00:18:42.938 "data_offset": 256, 00:18:42.938 "data_size": 7936 00:18:42.938 }, 00:18:42.938 { 00:18:42.938 "name": "pt2", 00:18:42.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.938 "is_configured": true, 00:18:42.938 "data_offset": 256, 
00:18:42.938 "data_size": 7936 00:18:42.938 } 00:18:42.938 ] 00:18:42.938 }' 00:18:42.939 18:15:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.939 18:15:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.199 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:43.199 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:43.199 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:43.199 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:43.199 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:43.199 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:43.199 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:43.199 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.199 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.200 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:43.200 [2024-12-06 18:15:55.305929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.200 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.200 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.200 "name": "raid_bdev1", 00:18:43.200 "aliases": [ 00:18:43.200 "d389b024-c10d-4039-a7a3-771132e8afb4" 00:18:43.200 ], 00:18:43.200 "product_name": 
"Raid Volume", 00:18:43.200 "block_size": 4096, 00:18:43.200 "num_blocks": 7936, 00:18:43.200 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:43.200 "md_size": 32, 00:18:43.200 "md_interleave": false, 00:18:43.200 "dif_type": 0, 00:18:43.200 "assigned_rate_limits": { 00:18:43.200 "rw_ios_per_sec": 0, 00:18:43.200 "rw_mbytes_per_sec": 0, 00:18:43.200 "r_mbytes_per_sec": 0, 00:18:43.200 "w_mbytes_per_sec": 0 00:18:43.200 }, 00:18:43.200 "claimed": false, 00:18:43.200 "zoned": false, 00:18:43.200 "supported_io_types": { 00:18:43.200 "read": true, 00:18:43.200 "write": true, 00:18:43.200 "unmap": false, 00:18:43.200 "flush": false, 00:18:43.200 "reset": true, 00:18:43.200 "nvme_admin": false, 00:18:43.200 "nvme_io": false, 00:18:43.200 "nvme_io_md": false, 00:18:43.200 "write_zeroes": true, 00:18:43.200 "zcopy": false, 00:18:43.200 "get_zone_info": false, 00:18:43.200 "zone_management": false, 00:18:43.200 "zone_append": false, 00:18:43.200 "compare": false, 00:18:43.200 "compare_and_write": false, 00:18:43.200 "abort": false, 00:18:43.200 "seek_hole": false, 00:18:43.200 "seek_data": false, 00:18:43.200 "copy": false, 00:18:43.200 "nvme_iov_md": false 00:18:43.200 }, 00:18:43.200 "memory_domains": [ 00:18:43.200 { 00:18:43.200 "dma_device_id": "system", 00:18:43.200 "dma_device_type": 1 00:18:43.200 }, 00:18:43.200 { 00:18:43.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.200 "dma_device_type": 2 00:18:43.200 }, 00:18:43.200 { 00:18:43.200 "dma_device_id": "system", 00:18:43.200 "dma_device_type": 1 00:18:43.200 }, 00:18:43.200 { 00:18:43.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.200 "dma_device_type": 2 00:18:43.200 } 00:18:43.200 ], 00:18:43.200 "driver_specific": { 00:18:43.200 "raid": { 00:18:43.200 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:43.200 "strip_size_kb": 0, 00:18:43.200 "state": "online", 00:18:43.200 "raid_level": "raid1", 00:18:43.200 "superblock": true, 00:18:43.200 "num_base_bdevs": 2, 00:18:43.200 
"num_base_bdevs_discovered": 2, 00:18:43.200 "num_base_bdevs_operational": 2, 00:18:43.200 "base_bdevs_list": [ 00:18:43.200 { 00:18:43.200 "name": "pt1", 00:18:43.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:43.200 "is_configured": true, 00:18:43.200 "data_offset": 256, 00:18:43.200 "data_size": 7936 00:18:43.200 }, 00:18:43.200 { 00:18:43.200 "name": "pt2", 00:18:43.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.200 "is_configured": true, 00:18:43.200 "data_offset": 256, 00:18:43.200 "data_size": 7936 00:18:43.200 } 00:18:43.200 ] 00:18:43.200 } 00:18:43.200 } 00:18:43.200 }' 00:18:43.200 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:43.459 pt2' 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.459 
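The `cmp_raid_bdev`/`cmp_base_bdev` steps above join four bdev fields with jq and string-compare the raid bdev against each base bdev. A minimal Python sketch of the same check, using illustrative dicts shaped like the `bdev_get_bdevs` output in this log (not live RPC data):

```python
import json

def format_fields(bdev: dict) -> str:
    # Mirrors: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # jq renders booleans as lowercase true/false; json.dumps does the same.
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join(json.dumps(bdev[k]) for k in keys)

# Values copied from the raid_bdev1 / pt1 / pt2 dumps in the log above.
raid_bdev = {"block_size": 4096, "md_size": 32, "md_interleave": False, "dif_type": 0}
base_bdev = {"block_size": 4096, "md_size": 32, "md_interleave": False, "dif_type": 0}

cmp_raid_bdev = format_fields(raid_bdev)
cmp_base_bdev = format_fields(base_bdev)
print(cmp_raid_bdev)  # prints: 4096 32 false 0
assert cmp_raid_bdev == cmp_base_bdev
```

This reproduces the `[[ 4096 32 false 0 == \4\0\9\6\ ... ]]` comparison the test performs for pt1 and pt2.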
18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.459 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.459 [2024-12-06 18:15:55.521514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' d389b024-c10d-4039-a7a3-771132e8afb4 '!=' d389b024-c10d-4039-a7a3-771132e8afb4 ']' 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.460 [2024-12-06 18:15:55.549245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.460 18:15:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.460 "name": "raid_bdev1", 00:18:43.460 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:43.460 "strip_size_kb": 0, 00:18:43.460 "state": "online", 00:18:43.460 "raid_level": "raid1", 00:18:43.460 "superblock": true, 00:18:43.460 "num_base_bdevs": 2, 00:18:43.460 "num_base_bdevs_discovered": 1, 00:18:43.460 "num_base_bdevs_operational": 1, 00:18:43.460 "base_bdevs_list": [ 00:18:43.460 { 00:18:43.460 "name": null, 00:18:43.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.460 "is_configured": false, 00:18:43.460 "data_offset": 0, 00:18:43.460 "data_size": 7936 00:18:43.460 }, 00:18:43.460 { 00:18:43.460 "name": "pt2", 00:18:43.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.460 "is_configured": true, 00:18:43.460 "data_offset": 256, 00:18:43.460 "data_size": 7936 00:18:43.460 } 00:18:43.460 ] 00:18:43.460 }' 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:43.460 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.028 [2024-12-06 18:15:55.960510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.028 [2024-12-06 18:15:55.960585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.028 [2024-12-06 18:15:55.960690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.028 [2024-12-06 18:15:55.960758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.028 [2024-12-06 18:15:55.960835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:44.028 18:15:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:44.028 18:15:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.028 [2024-12-06 18:15:56.020387] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:44.028 [2024-12-06 18:15:56.020442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.028 
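The `verify_raid_bdev_state` helper invoked throughout this log (bdev_raid.sh@103-115) selects the named raid bdev from `bdev_raid_get_bdevs all` with jq and checks its state, raid level, strip size, and base-bdev counts. A rough Python equivalent of that verification, run against a trimmed copy of the degraded-state JSON printed after pt1 was removed (a sketch of the checks, not the actual shell helper):

```python
import json

def verify_raid_bdev_state(raid_bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    # Approximate the assertions made by bdev_raid.sh's verify_raid_bdev_state.
    info = next(b for b in raid_bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered base bdevs are the configured entries in base_bdevs_list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered
    return info

# Trimmed copy of the raid_bdev1 record from the log (null entry = removed pt1).
dump = json.loads("""[{
    "name": "raid_bdev1", "strip_size_kb": 0, "state": "online",
    "raid_level": "raid1", "superblock": true, "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 1,
    "base_bdevs_list": [
        {"name": null, "is_configured": false},
        {"name": "pt2", "is_configured": true}
    ]
}]""")

verify_raid_bdev_state(dump, "raid_bdev1", "online", "raid1", 0, 1)
```

This matches the `verify_raid_bdev_state raid_bdev1 online raid1 0 1` call made after `bdev_passthru_delete pt1`: the array stays online with one operational base bdev.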
[2024-12-06 18:15:56.020459] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:44.028 [2024-12-06 18:15:56.020469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.028 [2024-12-06 18:15:56.022435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.028 [2024-12-06 18:15:56.022518] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:44.028 [2024-12-06 18:15:56.022575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:44.028 [2024-12-06 18:15:56.022636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:44.028 [2024-12-06 18:15:56.022722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:44.028 [2024-12-06 18:15:56.022735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:44.028 [2024-12-06 18:15:56.022809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:44.028 [2024-12-06 18:15:56.022931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:44.028 [2024-12-06 18:15:56.022939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:44.028 [2024-12-06 18:15:56.023050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.028 pt2 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:44.028 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.029 "name": "raid_bdev1", 00:18:44.029 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:44.029 "strip_size_kb": 0, 00:18:44.029 "state": "online", 00:18:44.029 "raid_level": "raid1", 00:18:44.029 "superblock": true, 00:18:44.029 "num_base_bdevs": 2, 00:18:44.029 "num_base_bdevs_discovered": 1, 00:18:44.029 "num_base_bdevs_operational": 1, 00:18:44.029 "base_bdevs_list": [ 00:18:44.029 { 00:18:44.029 
"name": null, 00:18:44.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.029 "is_configured": false, 00:18:44.029 "data_offset": 256, 00:18:44.029 "data_size": 7936 00:18:44.029 }, 00:18:44.029 { 00:18:44.029 "name": "pt2", 00:18:44.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.029 "is_configured": true, 00:18:44.029 "data_offset": 256, 00:18:44.029 "data_size": 7936 00:18:44.029 } 00:18:44.029 ] 00:18:44.029 }' 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.029 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.288 [2024-12-06 18:15:56.415733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.288 [2024-12-06 18:15:56.415813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.288 [2024-12-06 18:15:56.415916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.288 [2024-12-06 18:15:56.415989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.288 [2024-12-06 18:15:56.416056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.288 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.547 [2024-12-06 18:15:56.455711] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:44.547 [2024-12-06 18:15:56.455809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.547 [2024-12-06 18:15:56.455846] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:44.547 [2024-12-06 18:15:56.455873] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.547 [2024-12-06 18:15:56.457893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.547 [2024-12-06 18:15:56.457961] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:44.547 [2024-12-06 18:15:56.458056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:44.547 
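Earlier in this test, a second `bdev_raid_create` for `raid_bdev1` returned JSON-RPC error -17 ("File exists", i.e. -EEXIST), which the script deliberately expects before tearing down and rebuilding. A standalone sketch of that request/response shape, with fields flattened the way the verbose RPC dump prints them in the log (the truncated leading key is presumably the bdev name; no SPDK connection is made here):

```python
import json

# Request body as shown in the failed bdev_raid_create call in the log.
request = {
    "name": "raid_bdev1",          # assumed key; the log's dump is truncated
    "raid_level": "raid1",
    "base_bdevs": ["malloc1", "malloc2"],
    "superblock": False,
    "method": "bdev_raid_create",
    "req_id": 1,
}

# Error envelope returned when the raid bdev already exists.
response = {
    "code": -17,
    "message": "Failed to create RAID bdev raid_bdev1: File exists",
}

def is_eexist(resp: dict) -> bool:
    # True when a JSON-RPC error payload reports -EEXIST (-17).
    return resp.get("code") == -17

assert is_eexist(response)
print(json.dumps(request["base_bdevs"]))  # prints: ["malloc1", "malloc2"]
```

The shell side then asserts the failure (`[[ 1 == 0 ]]` fails, `es=1`) and confirms via `bdev_raid_get_bdevs all` that no duplicate bdev was created.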
[2024-12-06 18:15:56.458138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:44.547 [2024-12-06 18:15:56.458318] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:44.547 [2024-12-06 18:15:56.458371] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.547 [2024-12-06 18:15:56.458409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:44.547 [2024-12-06 18:15:56.458516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:44.547 [2024-12-06 18:15:56.458626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:44.547 [2024-12-06 18:15:56.458662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:44.547 [2024-12-06 18:15:56.458741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:44.547 [2024-12-06 18:15:56.458881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:44.547 [2024-12-06 18:15:56.458920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:44.547 [2024-12-06 18:15:56.459051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.547 pt1 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.547 18:15:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.547 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.547 "name": "raid_bdev1", 00:18:44.547 "uuid": "d389b024-c10d-4039-a7a3-771132e8afb4", 00:18:44.547 "strip_size_kb": 0, 00:18:44.547 "state": "online", 00:18:44.547 "raid_level": "raid1", 00:18:44.548 "superblock": true, 00:18:44.548 "num_base_bdevs": 2, 00:18:44.548 "num_base_bdevs_discovered": 1, 00:18:44.548 
"num_base_bdevs_operational": 1, 00:18:44.548 "base_bdevs_list": [ 00:18:44.548 { 00:18:44.548 "name": null, 00:18:44.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.548 "is_configured": false, 00:18:44.548 "data_offset": 256, 00:18:44.548 "data_size": 7936 00:18:44.548 }, 00:18:44.548 { 00:18:44.548 "name": "pt2", 00:18:44.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.548 "is_configured": true, 00:18:44.548 "data_offset": 256, 00:18:44.548 "data_size": 7936 00:18:44.548 } 00:18:44.548 ] 00:18:44.548 }' 00:18:44.548 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.548 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:44.807 [2024-12-06 
18:15:56.891248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' d389b024-c10d-4039-a7a3-771132e8afb4 '!=' d389b024-c10d-4039-a7a3-771132e8afb4 ']' 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88035 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88035 ']' 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88035 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.807 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88035 00:18:45.067 killing process with pid 88035 00:18:45.067 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.067 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.067 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88035' 00:18:45.067 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 88035 00:18:45.067 [2024-12-06 18:15:56.978023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:45.067 [2024-12-06 18:15:56.978121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.067 [2024-12-06 18:15:56.978170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:45.067 [2024-12-06 18:15:56.978186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:45.067 18:15:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88035 00:18:45.067 [2024-12-06 18:15:57.192488] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.458 18:15:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:46.458 00:18:46.458 real 0m5.791s 00:18:46.458 user 0m8.663s 00:18:46.458 sys 0m1.050s 00:18:46.458 18:15:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.458 ************************************ 00:18:46.458 END TEST raid_superblock_test_md_separate 00:18:46.458 ************************************ 00:18:46.458 18:15:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.458 18:15:58 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:46.458 18:15:58 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:46.458 18:15:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:46.458 18:15:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.458 18:15:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.458 ************************************ 00:18:46.458 START TEST raid_rebuild_test_sb_md_separate 00:18:46.458 ************************************ 00:18:46.458 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:46.458 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:46.458 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:46.458 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:46.458 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:46.458 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:46.458 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:46.458 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:46.459 
18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88360 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88360 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88360 ']' 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.459 18:15:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.459 [2024-12-06 18:15:58.457189] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:18:46.459 [2024-12-06 18:15:58.457421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88360 ] 00:18:46.459 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:46.459 Zero copy mechanism will not be used. 00:18:46.718 [2024-12-06 18:15:58.633041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.718 [2024-12-06 18:15:58.740825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.978 [2024-12-06 18:15:58.941251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.978 [2024-12-06 18:15:58.941402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.238 BaseBdev1_malloc 
00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.238 [2024-12-06 18:15:59.328416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:47.238 [2024-12-06 18:15:59.328477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.238 [2024-12-06 18:15:59.328502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:47.238 [2024-12-06 18:15:59.328513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.238 [2024-12-06 18:15:59.330376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.238 [2024-12-06 18:15:59.330483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:47.238 BaseBdev1 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.238 BaseBdev2_malloc 00:18:47.238 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.239 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:47.239 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.239 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.239 [2024-12-06 18:15:59.381398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:47.239 [2024-12-06 18:15:59.381457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.239 [2024-12-06 18:15:59.381477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:47.239 [2024-12-06 18:15:59.381489] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.239 [2024-12-06 18:15:59.383355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.239 [2024-12-06 18:15:59.383393] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:47.239 BaseBdev2 00:18:47.239 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.239 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:47.239 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.239 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.498 spare_malloc 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.498 spare_delay 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.498 [2024-12-06 18:15:59.477290] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:47.498 [2024-12-06 18:15:59.477399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.498 [2024-12-06 18:15:59.477425] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:47.498 [2024-12-06 18:15:59.477436] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.498 [2024-12-06 18:15:59.479299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.498 [2024-12-06 18:15:59.479339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:47.498 spare 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.498 [2024-12-06 18:15:59.489309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.498 [2024-12-06 18:15:59.491025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.498 [2024-12-06 18:15:59.491214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:47.498 [2024-12-06 18:15:59.491230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:47.498 [2024-12-06 18:15:59.491308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:47.498 [2024-12-06 18:15:59.491439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:47.498 [2024-12-06 18:15:59.491451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:47.498 [2024-12-06 18:15:59.491539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.498 18:15:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.498 "name": "raid_bdev1", 00:18:47.498 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:47.498 "strip_size_kb": 0, 00:18:47.498 "state": "online", 00:18:47.498 "raid_level": "raid1", 00:18:47.498 "superblock": true, 00:18:47.498 "num_base_bdevs": 2, 00:18:47.498 "num_base_bdevs_discovered": 2, 00:18:47.498 "num_base_bdevs_operational": 2, 00:18:47.498 "base_bdevs_list": [ 00:18:47.498 { 00:18:47.498 "name": "BaseBdev1", 00:18:47.498 "uuid": "10f6ab46-002d-5f61-a8de-eae8a0d86eb0", 00:18:47.498 "is_configured": true, 00:18:47.498 "data_offset": 256, 00:18:47.498 "data_size": 7936 00:18:47.498 }, 00:18:47.498 { 00:18:47.498 "name": "BaseBdev2", 00:18:47.498 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:47.498 "is_configured": true, 00:18:47.498 "data_offset": 256, 00:18:47.498 "data_size": 7936 
00:18:47.498 } 00:18:47.498 ] 00:18:47.498 }' 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.498 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.759 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:47.759 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.759 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.759 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.759 [2024-12-06 18:15:59.900918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:48.018 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:48.019 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:48.019 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:48.019 18:15:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:48.019 [2024-12-06 18:16:00.176231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:48.278 /dev/nbd0 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.278 1+0 records in 00:18:48.278 1+0 records out 00:18:48.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304092 s, 13.5 MB/s 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.278 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:48.279 18:16:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:48.279 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:48.279 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:48.847 7936+0 records in 00:18:48.847 7936+0 records out 00:18:48.847 32505856 bytes (33 MB, 31 MiB) copied, 0.593929 s, 54.7 MB/s 00:18:48.847 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:48.847 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.847 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:48.847 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.847 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:48.847 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.847 18:16:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:49.106 [2024-12-06 18:16:01.057158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.106 18:16:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.106 [2024-12-06 18:16:01.073817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.106 "name": "raid_bdev1", 00:18:49.106 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:49.106 "strip_size_kb": 0, 00:18:49.106 "state": "online", 00:18:49.106 "raid_level": "raid1", 00:18:49.106 "superblock": true, 00:18:49.106 "num_base_bdevs": 2, 00:18:49.106 "num_base_bdevs_discovered": 1, 00:18:49.106 "num_base_bdevs_operational": 1, 00:18:49.106 "base_bdevs_list": [ 00:18:49.106 { 00:18:49.106 "name": null, 00:18:49.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.106 "is_configured": false, 00:18:49.106 "data_offset": 0, 00:18:49.106 "data_size": 7936 00:18:49.106 }, 00:18:49.106 { 00:18:49.106 "name": "BaseBdev2", 00:18:49.106 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:49.106 "is_configured": true, 00:18:49.106 "data_offset": 256, 00:18:49.106 "data_size": 7936 00:18:49.106 } 00:18:49.106 ] 00:18:49.106 }' 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.106 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.364 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:49.364 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.364 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.622 [2024-12-06 18:16:01.533033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.622 [2024-12-06 18:16:01.547859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:49.622 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.622 18:16:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:49.622 [2024-12-06 18:16:01.549693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.558 "name": "raid_bdev1", 00:18:50.558 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:50.558 "strip_size_kb": 0, 00:18:50.558 "state": "online", 00:18:50.558 "raid_level": "raid1", 00:18:50.558 "superblock": true, 00:18:50.558 "num_base_bdevs": 2, 00:18:50.558 "num_base_bdevs_discovered": 2, 00:18:50.558 "num_base_bdevs_operational": 2, 00:18:50.558 "process": { 00:18:50.558 "type": "rebuild", 00:18:50.558 "target": "spare", 00:18:50.558 "progress": { 00:18:50.558 "blocks": 2560, 00:18:50.558 "percent": 32 00:18:50.558 } 00:18:50.558 }, 00:18:50.558 "base_bdevs_list": [ 00:18:50.558 { 00:18:50.558 "name": "spare", 00:18:50.558 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:50.558 "is_configured": true, 00:18:50.558 "data_offset": 256, 00:18:50.558 "data_size": 7936 00:18:50.558 }, 00:18:50.558 { 00:18:50.558 "name": "BaseBdev2", 00:18:50.558 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:50.558 "is_configured": true, 00:18:50.558 "data_offset": 256, 00:18:50.558 "data_size": 7936 00:18:50.558 } 00:18:50.558 ] 00:18:50.558 }' 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.558 18:16:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.558 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.558 [2024-12-06 18:16:02.689613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.815 [2024-12-06 18:16:02.755168] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.815 [2024-12-06 18:16:02.755291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.815 [2024-12-06 18:16:02.755330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.816 [2024-12-06 18:16:02.755355] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.816 18:16:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.816 "name": "raid_bdev1", 00:18:50.816 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:50.816 "strip_size_kb": 0, 00:18:50.816 "state": "online", 00:18:50.816 "raid_level": "raid1", 00:18:50.816 "superblock": true, 00:18:50.816 "num_base_bdevs": 2, 00:18:50.816 "num_base_bdevs_discovered": 1, 00:18:50.816 "num_base_bdevs_operational": 1, 00:18:50.816 "base_bdevs_list": [ 00:18:50.816 { 00:18:50.816 "name": null, 00:18:50.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.816 "is_configured": false, 00:18:50.816 "data_offset": 0, 00:18:50.816 "data_size": 7936 00:18:50.816 }, 00:18:50.816 { 00:18:50.816 "name": "BaseBdev2", 00:18:50.816 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:50.816 "is_configured": true, 00:18:50.816 "data_offset": 256, 00:18:50.816 "data_size": 7936 00:18:50.816 } 00:18:50.816 ] 00:18:50.816 }' 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.816 18:16:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.073 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.073 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.073 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.073 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.073 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.073 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.073 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.331 "name": "raid_bdev1", 00:18:51.331 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:51.331 "strip_size_kb": 0, 00:18:51.331 "state": "online", 00:18:51.331 "raid_level": "raid1", 00:18:51.331 "superblock": true, 00:18:51.331 "num_base_bdevs": 2, 00:18:51.331 "num_base_bdevs_discovered": 1, 00:18:51.331 "num_base_bdevs_operational": 1, 00:18:51.331 "base_bdevs_list": [ 00:18:51.331 { 00:18:51.331 "name": null, 00:18:51.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.331 
"is_configured": false, 00:18:51.331 "data_offset": 0, 00:18:51.331 "data_size": 7936 00:18:51.331 }, 00:18:51.331 { 00:18:51.331 "name": "BaseBdev2", 00:18:51.331 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:51.331 "is_configured": true, 00:18:51.331 "data_offset": 256, 00:18:51.331 "data_size": 7936 00:18:51.331 } 00:18:51.331 ] 00:18:51.331 }' 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.331 [2024-12-06 18:16:03.350689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.331 [2024-12-06 18:16:03.364448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.331 18:16:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:51.331 [2024-12-06 18:16:03.366240] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.268 18:16:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.268 "name": "raid_bdev1", 00:18:52.268 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:52.268 "strip_size_kb": 0, 00:18:52.268 "state": "online", 00:18:52.268 "raid_level": "raid1", 00:18:52.268 "superblock": true, 00:18:52.268 "num_base_bdevs": 2, 00:18:52.268 "num_base_bdevs_discovered": 2, 00:18:52.268 "num_base_bdevs_operational": 2, 00:18:52.268 "process": { 00:18:52.268 "type": "rebuild", 00:18:52.268 "target": "spare", 00:18:52.268 "progress": { 00:18:52.268 "blocks": 2560, 00:18:52.268 "percent": 32 00:18:52.268 } 00:18:52.268 }, 00:18:52.268 "base_bdevs_list": [ 00:18:52.268 { 00:18:52.268 "name": "spare", 00:18:52.268 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:52.268 "is_configured": true, 00:18:52.268 "data_offset": 256, 00:18:52.268 "data_size": 7936 00:18:52.268 }, 
00:18:52.268 { 00:18:52.268 "name": "BaseBdev2", 00:18:52.268 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:52.268 "is_configured": true, 00:18:52.268 "data_offset": 256, 00:18:52.268 "data_size": 7936 00:18:52.268 } 00:18:52.268 ] 00:18:52.268 }' 00:18:52.268 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:52.532 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=738 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.532 18:16:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.532 "name": "raid_bdev1", 00:18:52.532 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:52.532 "strip_size_kb": 0, 00:18:52.532 "state": "online", 00:18:52.532 "raid_level": "raid1", 00:18:52.532 "superblock": true, 00:18:52.532 "num_base_bdevs": 2, 00:18:52.532 "num_base_bdevs_discovered": 2, 00:18:52.532 "num_base_bdevs_operational": 2, 00:18:52.532 "process": { 00:18:52.532 "type": "rebuild", 00:18:52.532 "target": "spare", 00:18:52.532 "progress": { 00:18:52.532 "blocks": 2816, 00:18:52.532 "percent": 35 00:18:52.532 } 00:18:52.532 }, 00:18:52.532 "base_bdevs_list": [ 00:18:52.532 { 00:18:52.532 "name": "spare", 00:18:52.532 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:52.532 "is_configured": true, 00:18:52.532 "data_offset": 256, 00:18:52.532 "data_size": 7936 00:18:52.532 }, 00:18:52.532 { 00:18:52.532 "name": "BaseBdev2", 00:18:52.532 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:52.532 
"is_configured": true, 00:18:52.532 "data_offset": 256, 00:18:52.532 "data_size": 7936 00:18:52.532 } 00:18:52.532 ] 00:18:52.532 }' 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.532 18:16:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.936 18:16:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.936 "name": "raid_bdev1", 00:18:53.936 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:53.936 "strip_size_kb": 0, 00:18:53.936 "state": "online", 00:18:53.936 "raid_level": "raid1", 00:18:53.936 "superblock": true, 00:18:53.936 "num_base_bdevs": 2, 00:18:53.936 "num_base_bdevs_discovered": 2, 00:18:53.936 "num_base_bdevs_operational": 2, 00:18:53.936 "process": { 00:18:53.936 "type": "rebuild", 00:18:53.936 "target": "spare", 00:18:53.936 "progress": { 00:18:53.936 "blocks": 5888, 00:18:53.936 "percent": 74 00:18:53.936 } 00:18:53.936 }, 00:18:53.936 "base_bdevs_list": [ 00:18:53.936 { 00:18:53.936 "name": "spare", 00:18:53.936 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:53.936 "is_configured": true, 00:18:53.936 "data_offset": 256, 00:18:53.936 "data_size": 7936 00:18:53.936 }, 00:18:53.936 { 00:18:53.936 "name": "BaseBdev2", 00:18:53.936 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:53.936 "is_configured": true, 00:18:53.936 "data_offset": 256, 00:18:53.936 "data_size": 7936 00:18:53.936 } 00:18:53.936 ] 00:18:53.936 }' 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.936 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.937 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.937 18:16:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.506 [2024-12-06 18:16:06.479819] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:54.506 [2024-12-06 18:16:06.479964] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:54.506 [2024-12-06 18:16:06.480100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.767 "name": "raid_bdev1", 00:18:54.767 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:54.767 "strip_size_kb": 0, 00:18:54.767 "state": "online", 00:18:54.767 "raid_level": "raid1", 00:18:54.767 "superblock": true, 00:18:54.767 
"num_base_bdevs": 2, 00:18:54.767 "num_base_bdevs_discovered": 2, 00:18:54.767 "num_base_bdevs_operational": 2, 00:18:54.767 "base_bdevs_list": [ 00:18:54.767 { 00:18:54.767 "name": "spare", 00:18:54.767 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:54.767 "is_configured": true, 00:18:54.767 "data_offset": 256, 00:18:54.767 "data_size": 7936 00:18:54.767 }, 00:18:54.767 { 00:18:54.767 "name": "BaseBdev2", 00:18:54.767 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:54.767 "is_configured": true, 00:18:54.767 "data_offset": 256, 00:18:54.767 "data_size": 7936 00:18:54.767 } 00:18:54.767 ] 00:18:54.767 }' 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.767 18:16:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.767 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.027 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.027 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.027 "name": "raid_bdev1", 00:18:55.027 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:55.027 "strip_size_kb": 0, 00:18:55.027 "state": "online", 00:18:55.027 "raid_level": "raid1", 00:18:55.027 "superblock": true, 00:18:55.027 "num_base_bdevs": 2, 00:18:55.027 "num_base_bdevs_discovered": 2, 00:18:55.027 "num_base_bdevs_operational": 2, 00:18:55.027 "base_bdevs_list": [ 00:18:55.027 { 00:18:55.027 "name": "spare", 00:18:55.027 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:55.027 "is_configured": true, 00:18:55.027 "data_offset": 256, 00:18:55.027 "data_size": 7936 00:18:55.027 }, 00:18:55.027 { 00:18:55.027 "name": "BaseBdev2", 00:18:55.027 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:55.027 "is_configured": true, 00:18:55.027 "data_offset": 256, 00:18:55.027 "data_size": 7936 00:18:55.027 } 00:18:55.027 ] 00:18:55.027 }' 00:18:55.027 18:16:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.027 "name": "raid_bdev1", 00:18:55.027 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:55.027 
"strip_size_kb": 0, 00:18:55.027 "state": "online", 00:18:55.027 "raid_level": "raid1", 00:18:55.027 "superblock": true, 00:18:55.027 "num_base_bdevs": 2, 00:18:55.027 "num_base_bdevs_discovered": 2, 00:18:55.027 "num_base_bdevs_operational": 2, 00:18:55.027 "base_bdevs_list": [ 00:18:55.027 { 00:18:55.027 "name": "spare", 00:18:55.027 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:55.027 "is_configured": true, 00:18:55.027 "data_offset": 256, 00:18:55.027 "data_size": 7936 00:18:55.027 }, 00:18:55.027 { 00:18:55.027 "name": "BaseBdev2", 00:18:55.027 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:55.027 "is_configured": true, 00:18:55.027 "data_offset": 256, 00:18:55.027 "data_size": 7936 00:18:55.027 } 00:18:55.027 ] 00:18:55.027 }' 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.027 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.598 [2024-12-06 18:16:07.491404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.598 [2024-12-06 18:16:07.491438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.598 [2024-12-06 18:16:07.491525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.598 [2024-12-06 18:16:07.491602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.598 [2024-12-06 18:16:07.491613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:55.598 /dev/nbd0 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:55.598 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.858 1+0 records in 00:18:55.858 1+0 records out 00:18:55.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237391 s, 17.3 MB/s 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:55.858 /dev/nbd1 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.858 1+0 records in 00:18:55.858 1+0 records out 00:18:55.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028423 s, 14.4 MB/s 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.858 18:16:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:56.118 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:56.118 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:56.119 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.119 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:56.119 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:56.119 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.119 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.379 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.639 [2024-12-06 18:16:08.628030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:56.639 [2024-12-06 18:16:08.628157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.639 [2024-12-06 18:16:08.628189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:56.639 [2024-12-06 18:16:08.628198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:56.639 [2024-12-06 18:16:08.630212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.639 [2024-12-06 18:16:08.630248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:56.639 [2024-12-06 18:16:08.630316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:56.639 [2024-12-06 18:16:08.630370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.639 [2024-12-06 18:16:08.630513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:56.639 spare 00:18:56.639 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.640 [2024-12-06 18:16:08.730400] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:56.640 [2024-12-06 18:16:08.730428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:56.640 [2024-12-06 18:16:08.730526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:56.640 [2024-12-06 18:16:08.730661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:56.640 [2024-12-06 18:16:08.730671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:56.640 [2024-12-06 18:16:08.730786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.640 "name": "raid_bdev1", 00:18:56.640 "uuid": 
"9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:56.640 "strip_size_kb": 0, 00:18:56.640 "state": "online", 00:18:56.640 "raid_level": "raid1", 00:18:56.640 "superblock": true, 00:18:56.640 "num_base_bdevs": 2, 00:18:56.640 "num_base_bdevs_discovered": 2, 00:18:56.640 "num_base_bdevs_operational": 2, 00:18:56.640 "base_bdevs_list": [ 00:18:56.640 { 00:18:56.640 "name": "spare", 00:18:56.640 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:56.640 "is_configured": true, 00:18:56.640 "data_offset": 256, 00:18:56.640 "data_size": 7936 00:18:56.640 }, 00:18:56.640 { 00:18:56.640 "name": "BaseBdev2", 00:18:56.640 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:56.640 "is_configured": true, 00:18:56.640 "data_offset": 256, 00:18:56.640 "data_size": 7936 00:18:56.640 } 00:18:56.640 ] 00:18:56.640 }' 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.640 18:16:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.209 "name": "raid_bdev1", 00:18:57.209 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:57.209 "strip_size_kb": 0, 00:18:57.209 "state": "online", 00:18:57.209 "raid_level": "raid1", 00:18:57.209 "superblock": true, 00:18:57.209 "num_base_bdevs": 2, 00:18:57.209 "num_base_bdevs_discovered": 2, 00:18:57.209 "num_base_bdevs_operational": 2, 00:18:57.209 "base_bdevs_list": [ 00:18:57.209 { 00:18:57.209 "name": "spare", 00:18:57.209 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:57.209 "is_configured": true, 00:18:57.209 "data_offset": 256, 00:18:57.209 "data_size": 7936 00:18:57.209 }, 00:18:57.209 { 00:18:57.209 "name": "BaseBdev2", 00:18:57.209 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:57.209 "is_configured": true, 00:18:57.209 "data_offset": 256, 00:18:57.209 "data_size": 7936 00:18:57.209 } 00:18:57.209 ] 00:18:57.209 }' 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:57.209 
18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.209 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.468 [2024-12-06 18:16:09.379016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.468 "name": "raid_bdev1", 00:18:57.468 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:57.468 "strip_size_kb": 0, 00:18:57.468 "state": "online", 00:18:57.468 "raid_level": "raid1", 00:18:57.468 "superblock": true, 00:18:57.468 "num_base_bdevs": 2, 00:18:57.468 "num_base_bdevs_discovered": 1, 00:18:57.468 "num_base_bdevs_operational": 1, 00:18:57.468 "base_bdevs_list": [ 00:18:57.468 { 00:18:57.468 "name": null, 00:18:57.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.468 "is_configured": false, 00:18:57.468 "data_offset": 0, 00:18:57.468 "data_size": 7936 00:18:57.468 }, 00:18:57.468 { 00:18:57.468 "name": "BaseBdev2", 00:18:57.468 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:57.468 "is_configured": true, 00:18:57.468 "data_offset": 256, 00:18:57.468 "data_size": 7936 00:18:57.468 } 00:18:57.468 ] 00:18:57.468 }' 00:18:57.468 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.468 18:16:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.728 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:57.728 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.728 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.728 [2024-12-06 18:16:09.826256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.728 [2024-12-06 18:16:09.826507] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:57.728 [2024-12-06 18:16:09.826570] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:57.728 [2024-12-06 18:16:09.826658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.728 [2024-12-06 18:16:09.840439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:57.728 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.728 18:16:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:57.728 [2024-12-06 18:16:09.842283] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.105 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.105 "name": "raid_bdev1", 00:18:59.105 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:59.105 "strip_size_kb": 0, 00:18:59.105 "state": "online", 00:18:59.105 "raid_level": "raid1", 00:18:59.105 "superblock": true, 00:18:59.105 "num_base_bdevs": 2, 00:18:59.105 "num_base_bdevs_discovered": 2, 00:18:59.105 "num_base_bdevs_operational": 2, 00:18:59.105 "process": { 00:18:59.105 "type": "rebuild", 00:18:59.105 "target": "spare", 00:18:59.105 "progress": { 00:18:59.105 "blocks": 2560, 00:18:59.105 "percent": 32 00:18:59.105 } 00:18:59.105 }, 00:18:59.105 "base_bdevs_list": [ 00:18:59.105 { 00:18:59.105 "name": "spare", 00:18:59.105 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:18:59.105 "is_configured": true, 00:18:59.105 "data_offset": 256, 00:18:59.105 "data_size": 7936 00:18:59.105 }, 00:18:59.105 { 00:18:59.105 "name": "BaseBdev2", 00:18:59.105 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:59.105 "is_configured": true, 00:18:59.105 "data_offset": 256, 00:18:59.106 "data_size": 7936 00:18:59.106 } 00:18:59.106 ] 00:18:59.106 }' 00:18:59.106 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.106 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.106 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.106 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.106 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:59.106 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.106 18:16:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.106 [2024-12-06 18:16:10.990095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.106 [2024-12-06 18:16:11.047595] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:59.106 [2024-12-06 18:16:11.047654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.106 [2024-12-06 18:16:11.047669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.106 [2024-12-06 18:16:11.047690] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.106 "name": "raid_bdev1", 00:18:59.106 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:18:59.106 "strip_size_kb": 0, 00:18:59.106 "state": "online", 00:18:59.106 "raid_level": "raid1", 00:18:59.106 "superblock": true, 00:18:59.106 "num_base_bdevs": 2, 00:18:59.106 "num_base_bdevs_discovered": 1, 00:18:59.106 "num_base_bdevs_operational": 1, 00:18:59.106 "base_bdevs_list": [ 00:18:59.106 { 00:18:59.106 "name": null, 00:18:59.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.106 
"is_configured": false, 00:18:59.106 "data_offset": 0, 00:18:59.106 "data_size": 7936 00:18:59.106 }, 00:18:59.106 { 00:18:59.106 "name": "BaseBdev2", 00:18:59.106 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:18:59.106 "is_configured": true, 00:18:59.106 "data_offset": 256, 00:18:59.106 "data_size": 7936 00:18:59.106 } 00:18:59.106 ] 00:18:59.106 }' 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.106 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.365 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:59.365 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.365 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.365 [2024-12-06 18:16:11.475241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:59.365 [2024-12-06 18:16:11.475355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.365 [2024-12-06 18:16:11.475400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:59.365 [2024-12-06 18:16:11.475430] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.365 [2024-12-06 18:16:11.475746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.365 [2024-12-06 18:16:11.475804] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:59.365 [2024-12-06 18:16:11.475899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:59.365 [2024-12-06 18:16:11.475940] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:59.365 [2024-12-06 18:16:11.475982] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:59.365 [2024-12-06 18:16:11.476076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:59.366 [2024-12-06 18:16:11.489615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:59.366 spare 00:18:59.366 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.366 18:16:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:59.366 [2024-12-06 18:16:11.491441] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:00.746 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.747 "name": "raid_bdev1", 00:19:00.747 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:19:00.747 "strip_size_kb": 0, 00:19:00.747 "state": "online", 00:19:00.747 "raid_level": "raid1", 00:19:00.747 "superblock": true, 00:19:00.747 "num_base_bdevs": 2, 00:19:00.747 "num_base_bdevs_discovered": 2, 00:19:00.747 "num_base_bdevs_operational": 2, 00:19:00.747 "process": { 00:19:00.747 "type": "rebuild", 00:19:00.747 "target": "spare", 00:19:00.747 "progress": { 00:19:00.747 "blocks": 2560, 00:19:00.747 "percent": 32 00:19:00.747 } 00:19:00.747 }, 00:19:00.747 "base_bdevs_list": [ 00:19:00.747 { 00:19:00.747 "name": "spare", 00:19:00.747 "uuid": "3f76d4c7-d7b6-5d6a-b069-fe570b78da25", 00:19:00.747 "is_configured": true, 00:19:00.747 "data_offset": 256, 00:19:00.747 "data_size": 7936 00:19:00.747 }, 00:19:00.747 { 00:19:00.747 "name": "BaseBdev2", 00:19:00.747 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:19:00.747 "is_configured": true, 00:19:00.747 "data_offset": 256, 00:19:00.747 "data_size": 7936 00:19:00.747 } 00:19:00.747 ] 00:19:00.747 }' 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.747 18:16:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.747 [2024-12-06 18:16:12.635782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:00.747 [2024-12-06 18:16:12.696653] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:00.747 [2024-12-06 18:16:12.696709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.747 [2024-12-06 18:16:12.696755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:00.747 [2024-12-06 18:16:12.696762] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.747 18:16:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.747 "name": "raid_bdev1", 00:19:00.747 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:19:00.747 "strip_size_kb": 0, 00:19:00.747 "state": "online", 00:19:00.747 "raid_level": "raid1", 00:19:00.747 "superblock": true, 00:19:00.747 "num_base_bdevs": 2, 00:19:00.747 "num_base_bdevs_discovered": 1, 00:19:00.747 "num_base_bdevs_operational": 1, 00:19:00.747 "base_bdevs_list": [ 00:19:00.747 { 00:19:00.747 "name": null, 00:19:00.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.747 "is_configured": false, 00:19:00.747 "data_offset": 0, 00:19:00.747 "data_size": 7936 00:19:00.747 }, 00:19:00.747 { 00:19:00.747 "name": "BaseBdev2", 00:19:00.747 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:19:00.747 "is_configured": true, 00:19:00.747 "data_offset": 256, 00:19:00.747 "data_size": 7936 00:19:00.747 } 00:19:00.747 ] 00:19:00.747 }' 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.747 18:16:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.007 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.266 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.266 "name": "raid_bdev1", 00:19:01.266 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:19:01.266 "strip_size_kb": 0, 00:19:01.266 "state": "online", 00:19:01.266 "raid_level": "raid1", 00:19:01.266 "superblock": true, 00:19:01.266 "num_base_bdevs": 2, 00:19:01.266 "num_base_bdevs_discovered": 1, 00:19:01.266 "num_base_bdevs_operational": 1, 00:19:01.266 "base_bdevs_list": [ 00:19:01.266 { 00:19:01.266 "name": null, 00:19:01.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.267 "is_configured": false, 00:19:01.267 "data_offset": 0, 00:19:01.267 "data_size": 7936 00:19:01.267 }, 00:19:01.267 { 00:19:01.267 "name": "BaseBdev2", 00:19:01.267 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:19:01.267 "is_configured": true, 
00:19:01.267 "data_offset": 256, 00:19:01.267 "data_size": 7936 00:19:01.267 } 00:19:01.267 ] 00:19:01.267 }' 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.267 [2024-12-06 18:16:13.303526] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:01.267 [2024-12-06 18:16:13.303588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.267 [2024-12-06 18:16:13.303626] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:01.267 [2024-12-06 18:16:13.303634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.267 [2024-12-06 18:16:13.303861] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.267 [2024-12-06 18:16:13.303873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:01.267 [2024-12-06 18:16:13.303923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:01.267 [2024-12-06 18:16:13.303937] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:01.267 [2024-12-06 18:16:13.303946] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:01.267 [2024-12-06 18:16:13.303956] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:01.267 BaseBdev1 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.267 18:16:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.263 "name": "raid_bdev1", 00:19:02.263 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:19:02.263 "strip_size_kb": 0, 00:19:02.263 "state": "online", 00:19:02.263 "raid_level": "raid1", 00:19:02.263 "superblock": true, 00:19:02.263 "num_base_bdevs": 2, 00:19:02.263 "num_base_bdevs_discovered": 1, 00:19:02.263 "num_base_bdevs_operational": 1, 00:19:02.263 "base_bdevs_list": [ 00:19:02.263 { 00:19:02.263 "name": null, 00:19:02.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.263 "is_configured": false, 00:19:02.263 "data_offset": 0, 00:19:02.263 "data_size": 7936 00:19:02.263 }, 00:19:02.263 { 00:19:02.263 "name": "BaseBdev2", 00:19:02.263 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:19:02.263 "is_configured": true, 00:19:02.263 "data_offset": 256, 00:19:02.263 "data_size": 7936 00:19:02.263 } 00:19:02.263 ] 00:19:02.263 }' 00:19:02.263 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.263 18:16:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.831 "name": "raid_bdev1", 00:19:02.831 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:19:02.831 "strip_size_kb": 0, 00:19:02.831 "state": "online", 00:19:02.831 "raid_level": "raid1", 00:19:02.831 "superblock": true, 00:19:02.831 "num_base_bdevs": 2, 00:19:02.831 "num_base_bdevs_discovered": 1, 00:19:02.831 "num_base_bdevs_operational": 1, 00:19:02.831 "base_bdevs_list": [ 00:19:02.831 { 00:19:02.831 "name": null, 00:19:02.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.831 "is_configured": false, 00:19:02.831 "data_offset": 0, 00:19:02.831 
"data_size": 7936 00:19:02.831 }, 00:19:02.831 { 00:19:02.831 "name": "BaseBdev2", 00:19:02.831 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:19:02.831 "is_configured": true, 00:19:02.831 "data_offset": 256, 00:19:02.831 "data_size": 7936 00:19:02.831 } 00:19:02.831 ] 00:19:02.831 }' 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.831 [2024-12-06 18:16:14.900883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.831 [2024-12-06 18:16:14.901085] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:02.831 [2024-12-06 18:16:14.901101] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:02.831 request: 00:19:02.831 { 00:19:02.831 "base_bdev": "BaseBdev1", 00:19:02.831 "raid_bdev": "raid_bdev1", 00:19:02.831 "method": "bdev_raid_add_base_bdev", 00:19:02.831 "req_id": 1 00:19:02.831 } 00:19:02.831 Got JSON-RPC error response 00:19:02.831 response: 00:19:02.831 { 00:19:02.831 "code": -22, 00:19:02.831 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:02.831 } 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.831 18:16:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:03.786 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.786 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.786 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.786 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.786 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.786 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.786 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.786 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.786 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.787 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.787 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.787 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.787 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.787 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.787 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.046 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.046 "name": "raid_bdev1", 00:19:04.046 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:19:04.046 "strip_size_kb": 0, 00:19:04.046 "state": "online", 00:19:04.046 "raid_level": "raid1", 00:19:04.046 "superblock": true, 00:19:04.046 "num_base_bdevs": 2, 00:19:04.046 "num_base_bdevs_discovered": 1, 00:19:04.046 "num_base_bdevs_operational": 1, 00:19:04.046 "base_bdevs_list": [ 
00:19:04.046 { 00:19:04.046 "name": null, 00:19:04.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.046 "is_configured": false, 00:19:04.046 "data_offset": 0, 00:19:04.046 "data_size": 7936 00:19:04.046 }, 00:19:04.046 { 00:19:04.046 "name": "BaseBdev2", 00:19:04.046 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:19:04.046 "is_configured": true, 00:19:04.046 "data_offset": 256, 00:19:04.046 "data_size": 7936 00:19:04.046 } 00:19:04.046 ] 00:19:04.046 }' 00:19:04.046 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.046 18:16:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.306 "name": "raid_bdev1", 00:19:04.306 "uuid": "9b6f89ff-bf87-4e20-a907-6261ae42ef09", 00:19:04.306 "strip_size_kb": 0, 00:19:04.306 "state": "online", 00:19:04.306 "raid_level": "raid1", 00:19:04.306 "superblock": true, 00:19:04.306 "num_base_bdevs": 2, 00:19:04.306 "num_base_bdevs_discovered": 1, 00:19:04.306 "num_base_bdevs_operational": 1, 00:19:04.306 "base_bdevs_list": [ 00:19:04.306 { 00:19:04.306 "name": null, 00:19:04.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.306 "is_configured": false, 00:19:04.306 "data_offset": 0, 00:19:04.306 "data_size": 7936 00:19:04.306 }, 00:19:04.306 { 00:19:04.306 "name": "BaseBdev2", 00:19:04.306 "uuid": "2abf98ab-4805-53bc-8ec5-90e3bb4a63a7", 00:19:04.306 "is_configured": true, 00:19:04.306 "data_offset": 256, 00:19:04.306 "data_size": 7936 00:19:04.306 } 00:19:04.306 ] 00:19:04.306 }' 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88360 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88360 ']' 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88360 00:19:04.306 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:04.566 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.566 
18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88360 00:19:04.566 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.566 killing process with pid 88360 00:19:04.566 Received shutdown signal, test time was about 60.000000 seconds 00:19:04.566 00:19:04.566 Latency(us) 00:19:04.566 [2024-12-06T18:16:16.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.566 [2024-12-06T18:16:16.734Z] =================================================================================================================== 00:19:04.566 [2024-12-06T18:16:16.734Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:04.566 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.566 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88360' 00:19:04.566 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88360 00:19:04.566 [2024-12-06 18:16:16.509344] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:04.566 [2024-12-06 18:16:16.509471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.566 18:16:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88360 00:19:04.566 [2024-12-06 18:16:16.509525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.566 [2024-12-06 18:16:16.509537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:04.835 [2024-12-06 18:16:16.819266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.772 18:16:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:19:05.772 00:19:05.772 real 0m19.531s 00:19:05.772 user 0m25.408s 00:19:05.772 sys 0m2.526s 00:19:05.772 18:16:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.772 ************************************ 00:19:05.772 END TEST raid_rebuild_test_sb_md_separate 00:19:05.772 ************************************ 00:19:05.772 18:16:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.031 18:16:17 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:06.031 18:16:17 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:06.031 18:16:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:06.031 18:16:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.031 18:16:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.031 ************************************ 00:19:06.031 START TEST raid_state_function_test_sb_md_interleaved 00:19:06.031 ************************************ 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:06.031 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:06.032 18:16:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:06.032 Process raid pid: 89050 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89050 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89050' 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89050 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89050 ']' 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.032 18:16:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.032 [2024-12-06 18:16:18.060813] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
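For readers following the xtrace: the `verify_raid_bdev_state` helper in bdev_raid.sh filters the `bdev_raid_get_bdevs all` output with jq (`select(.name == "Existed_Raid")`) and compares the state fields against the expected values. A minimal Python sketch of the same check, using JSON values copied from the dump that follows; the `verify_raid_bdev_state` function here is an illustrative stand-in for the shell helper, not SPDK code:

```python
import json

# JSON shaped like the log's `bdev_raid_get_bdevs all` dump
# (field values copied from the xtrace; trimmed for brevity).
raid_bdevs_json = """
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "superblock": true,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 2,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false, "data_offset": 0, "data_size": 0},
      {"name": "BaseBdev2", "is_configured": false, "data_offset": 0, "data_size": 0}
    ]
  }
]
"""

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Mirror the jq-based check: select the raid bdev by name and
    compare the fields the shell test asserts on."""
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

bdevs = json.loads(raid_bdevs_json)
print(verify_raid_bdev_state(bdevs, "Existed_Raid", "configuring", "raid1", 0, 2))
```

The shell test performs this comparison after each `bdev_raid_create` / `bdev_raid_delete` step to confirm the raid bdev stays in `configuring` state until both base bdevs are discovered, at which point the expected state flips to `online`.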
00:19:06.032 [2024-12-06 18:16:18.060999] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.291 [2024-12-06 18:16:18.237091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.291 [2024-12-06 18:16:18.349137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.551 [2024-12-06 18:16:18.544677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.551 [2024-12-06 18:16:18.544711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.811 [2024-12-06 18:16:18.886654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:06.811 [2024-12-06 18:16:18.886711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:06.811 [2024-12-06 18:16:18.886722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.811 [2024-12-06 18:16:18.886730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.811 18:16:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.811 18:16:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.811 "name": "Existed_Raid", 00:19:06.811 "uuid": "a9263b41-a56c-494a-b534-ce0762bcc826", 00:19:06.811 "strip_size_kb": 0, 00:19:06.811 "state": "configuring", 00:19:06.811 "raid_level": "raid1", 00:19:06.811 "superblock": true, 00:19:06.811 "num_base_bdevs": 2, 00:19:06.811 "num_base_bdevs_discovered": 0, 00:19:06.811 "num_base_bdevs_operational": 2, 00:19:06.811 "base_bdevs_list": [ 00:19:06.811 { 00:19:06.811 "name": "BaseBdev1", 00:19:06.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.811 "is_configured": false, 00:19:06.811 "data_offset": 0, 00:19:06.811 "data_size": 0 00:19:06.811 }, 00:19:06.811 { 00:19:06.811 "name": "BaseBdev2", 00:19:06.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.811 "is_configured": false, 00:19:06.811 "data_offset": 0, 00:19:06.811 "data_size": 0 00:19:06.811 } 00:19:06.811 ] 00:19:06.811 }' 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.811 18:16:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.380 [2024-12-06 18:16:19.361795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:07.380 [2024-12-06 18:16:19.361880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.380 [2024-12-06 18:16:19.373756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:07.380 [2024-12-06 18:16:19.373849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:07.380 [2024-12-06 18:16:19.373877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.380 [2024-12-06 18:16:19.373901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.380 [2024-12-06 18:16:19.420511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.380 BaseBdev1 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.380 [ 00:19:07.380 { 00:19:07.380 "name": "BaseBdev1", 00:19:07.380 "aliases": [ 00:19:07.380 "af286fb4-9eb8-4f21-a6a6-a0aa275834d0" 00:19:07.380 ], 00:19:07.380 "product_name": "Malloc disk", 00:19:07.380 "block_size": 4128, 00:19:07.380 "num_blocks": 8192, 00:19:07.380 "uuid": "af286fb4-9eb8-4f21-a6a6-a0aa275834d0", 00:19:07.380 "md_size": 32, 00:19:07.380 
"md_interleave": true, 00:19:07.380 "dif_type": 0, 00:19:07.380 "assigned_rate_limits": { 00:19:07.380 "rw_ios_per_sec": 0, 00:19:07.380 "rw_mbytes_per_sec": 0, 00:19:07.380 "r_mbytes_per_sec": 0, 00:19:07.380 "w_mbytes_per_sec": 0 00:19:07.380 }, 00:19:07.380 "claimed": true, 00:19:07.380 "claim_type": "exclusive_write", 00:19:07.380 "zoned": false, 00:19:07.380 "supported_io_types": { 00:19:07.380 "read": true, 00:19:07.380 "write": true, 00:19:07.380 "unmap": true, 00:19:07.380 "flush": true, 00:19:07.380 "reset": true, 00:19:07.380 "nvme_admin": false, 00:19:07.380 "nvme_io": false, 00:19:07.380 "nvme_io_md": false, 00:19:07.380 "write_zeroes": true, 00:19:07.380 "zcopy": true, 00:19:07.380 "get_zone_info": false, 00:19:07.380 "zone_management": false, 00:19:07.380 "zone_append": false, 00:19:07.380 "compare": false, 00:19:07.380 "compare_and_write": false, 00:19:07.380 "abort": true, 00:19:07.380 "seek_hole": false, 00:19:07.380 "seek_data": false, 00:19:07.380 "copy": true, 00:19:07.380 "nvme_iov_md": false 00:19:07.380 }, 00:19:07.380 "memory_domains": [ 00:19:07.380 { 00:19:07.380 "dma_device_id": "system", 00:19:07.380 "dma_device_type": 1 00:19:07.380 }, 00:19:07.380 { 00:19:07.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.380 "dma_device_type": 2 00:19:07.380 } 00:19:07.380 ], 00:19:07.380 "driver_specific": {} 00:19:07.380 } 00:19:07.380 ] 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.380 18:16:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.380 "name": "Existed_Raid", 00:19:07.380 "uuid": "e7f68373-d11d-4d58-be9b-bad831573f2d", 00:19:07.380 "strip_size_kb": 0, 00:19:07.380 "state": "configuring", 00:19:07.380 "raid_level": "raid1", 
00:19:07.380 "superblock": true, 00:19:07.380 "num_base_bdevs": 2, 00:19:07.380 "num_base_bdevs_discovered": 1, 00:19:07.380 "num_base_bdevs_operational": 2, 00:19:07.380 "base_bdevs_list": [ 00:19:07.380 { 00:19:07.380 "name": "BaseBdev1", 00:19:07.380 "uuid": "af286fb4-9eb8-4f21-a6a6-a0aa275834d0", 00:19:07.380 "is_configured": true, 00:19:07.380 "data_offset": 256, 00:19:07.380 "data_size": 7936 00:19:07.380 }, 00:19:07.380 { 00:19:07.380 "name": "BaseBdev2", 00:19:07.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.380 "is_configured": false, 00:19:07.380 "data_offset": 0, 00:19:07.380 "data_size": 0 00:19:07.380 } 00:19:07.380 ] 00:19:07.380 }' 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.380 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.950 [2024-12-06 18:16:19.915751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:07.950 [2024-12-06 18:16:19.915848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.950 [2024-12-06 18:16:19.923793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.950 [2024-12-06 18:16:19.925588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.950 [2024-12-06 18:16:19.925627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.950 
18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.950 "name": "Existed_Raid", 00:19:07.950 "uuid": "da4c35ea-2e0d-46f3-b5e5-2106a1416413", 00:19:07.950 "strip_size_kb": 0, 00:19:07.950 "state": "configuring", 00:19:07.950 "raid_level": "raid1", 00:19:07.950 "superblock": true, 00:19:07.950 "num_base_bdevs": 2, 00:19:07.950 "num_base_bdevs_discovered": 1, 00:19:07.950 "num_base_bdevs_operational": 2, 00:19:07.950 "base_bdevs_list": [ 00:19:07.950 { 00:19:07.950 "name": "BaseBdev1", 00:19:07.950 "uuid": "af286fb4-9eb8-4f21-a6a6-a0aa275834d0", 00:19:07.950 "is_configured": true, 00:19:07.950 "data_offset": 256, 00:19:07.950 "data_size": 7936 00:19:07.950 }, 00:19:07.950 { 00:19:07.950 "name": "BaseBdev2", 00:19:07.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.950 "is_configured": false, 00:19:07.950 "data_offset": 0, 00:19:07.950 "data_size": 0 00:19:07.950 } 00:19:07.950 ] 00:19:07.950 }' 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:07.950 18:16:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.210 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:08.210 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.210 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.211 [2024-12-06 18:16:20.337586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:08.211 [2024-12-06 18:16:20.337883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:08.211 [2024-12-06 18:16:20.337934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:08.211 [2024-12-06 18:16:20.338041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:08.211 [2024-12-06 18:16:20.338173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:08.211 [2024-12-06 18:16:20.338215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:08.211 [2024-12-06 18:16:20.338311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.211 BaseBdev2 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.211 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.211 [ 00:19:08.211 { 00:19:08.211 "name": "BaseBdev2", 00:19:08.211 "aliases": [ 00:19:08.211 "b9e1f6f8-62cf-4b36-bb65-c873c746d71a" 00:19:08.211 ], 00:19:08.211 "product_name": "Malloc disk", 00:19:08.211 "block_size": 4128, 00:19:08.211 "num_blocks": 8192, 00:19:08.211 "uuid": "b9e1f6f8-62cf-4b36-bb65-c873c746d71a", 00:19:08.211 "md_size": 32, 00:19:08.211 "md_interleave": true, 00:19:08.211 "dif_type": 0, 00:19:08.211 "assigned_rate_limits": { 00:19:08.211 "rw_ios_per_sec": 0, 00:19:08.211 "rw_mbytes_per_sec": 0, 00:19:08.211 "r_mbytes_per_sec": 0, 00:19:08.211 "w_mbytes_per_sec": 0 00:19:08.211 }, 00:19:08.211 "claimed": true, 00:19:08.211 "claim_type": "exclusive_write", 
00:19:08.211 "zoned": false, 00:19:08.211 "supported_io_types": { 00:19:08.211 "read": true, 00:19:08.211 "write": true, 00:19:08.211 "unmap": true, 00:19:08.211 "flush": true, 00:19:08.211 "reset": true, 00:19:08.211 "nvme_admin": false, 00:19:08.211 "nvme_io": false, 00:19:08.211 "nvme_io_md": false, 00:19:08.211 "write_zeroes": true, 00:19:08.211 "zcopy": true, 00:19:08.211 "get_zone_info": false, 00:19:08.211 "zone_management": false, 00:19:08.211 "zone_append": false, 00:19:08.211 "compare": false, 00:19:08.211 "compare_and_write": false, 00:19:08.211 "abort": true, 00:19:08.211 "seek_hole": false, 00:19:08.211 "seek_data": false, 00:19:08.211 "copy": true, 00:19:08.211 "nvme_iov_md": false 00:19:08.211 }, 00:19:08.211 "memory_domains": [ 00:19:08.211 { 00:19:08.471 "dma_device_id": "system", 00:19:08.471 "dma_device_type": 1 00:19:08.471 }, 00:19:08.471 { 00:19:08.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.471 "dma_device_type": 2 00:19:08.471 } 00:19:08.471 ], 00:19:08.471 "driver_specific": {} 00:19:08.471 } 00:19:08.471 ] 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.471 
18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.471 "name": "Existed_Raid", 00:19:08.471 "uuid": "da4c35ea-2e0d-46f3-b5e5-2106a1416413", 00:19:08.471 "strip_size_kb": 0, 00:19:08.471 "state": "online", 00:19:08.471 "raid_level": "raid1", 00:19:08.471 "superblock": true, 00:19:08.471 "num_base_bdevs": 2, 00:19:08.471 "num_base_bdevs_discovered": 2, 00:19:08.471 
"num_base_bdevs_operational": 2, 00:19:08.471 "base_bdevs_list": [ 00:19:08.471 { 00:19:08.471 "name": "BaseBdev1", 00:19:08.471 "uuid": "af286fb4-9eb8-4f21-a6a6-a0aa275834d0", 00:19:08.471 "is_configured": true, 00:19:08.471 "data_offset": 256, 00:19:08.471 "data_size": 7936 00:19:08.471 }, 00:19:08.471 { 00:19:08.471 "name": "BaseBdev2", 00:19:08.471 "uuid": "b9e1f6f8-62cf-4b36-bb65-c873c746d71a", 00:19:08.471 "is_configured": true, 00:19:08.471 "data_offset": 256, 00:19:08.471 "data_size": 7936 00:19:08.471 } 00:19:08.471 ] 00:19:08.471 }' 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.471 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.730 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.731 18:16:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:08.731 [2024-12-06 18:16:20.793224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:08.731 "name": "Existed_Raid", 00:19:08.731 "aliases": [ 00:19:08.731 "da4c35ea-2e0d-46f3-b5e5-2106a1416413" 00:19:08.731 ], 00:19:08.731 "product_name": "Raid Volume", 00:19:08.731 "block_size": 4128, 00:19:08.731 "num_blocks": 7936, 00:19:08.731 "uuid": "da4c35ea-2e0d-46f3-b5e5-2106a1416413", 00:19:08.731 "md_size": 32, 00:19:08.731 "md_interleave": true, 00:19:08.731 "dif_type": 0, 00:19:08.731 "assigned_rate_limits": { 00:19:08.731 "rw_ios_per_sec": 0, 00:19:08.731 "rw_mbytes_per_sec": 0, 00:19:08.731 "r_mbytes_per_sec": 0, 00:19:08.731 "w_mbytes_per_sec": 0 00:19:08.731 }, 00:19:08.731 "claimed": false, 00:19:08.731 "zoned": false, 00:19:08.731 "supported_io_types": { 00:19:08.731 "read": true, 00:19:08.731 "write": true, 00:19:08.731 "unmap": false, 00:19:08.731 "flush": false, 00:19:08.731 "reset": true, 00:19:08.731 "nvme_admin": false, 00:19:08.731 "nvme_io": false, 00:19:08.731 "nvme_io_md": false, 00:19:08.731 "write_zeroes": true, 00:19:08.731 "zcopy": false, 00:19:08.731 "get_zone_info": false, 00:19:08.731 "zone_management": false, 00:19:08.731 "zone_append": false, 00:19:08.731 "compare": false, 00:19:08.731 "compare_and_write": false, 00:19:08.731 "abort": false, 00:19:08.731 "seek_hole": false, 00:19:08.731 "seek_data": false, 00:19:08.731 "copy": false, 00:19:08.731 "nvme_iov_md": false 00:19:08.731 }, 00:19:08.731 "memory_domains": [ 00:19:08.731 { 00:19:08.731 "dma_device_id": "system", 00:19:08.731 "dma_device_type": 1 00:19:08.731 }, 00:19:08.731 { 00:19:08.731 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:08.731 "dma_device_type": 2 00:19:08.731 }, 00:19:08.731 { 00:19:08.731 "dma_device_id": "system", 00:19:08.731 "dma_device_type": 1 00:19:08.731 }, 00:19:08.731 { 00:19:08.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.731 "dma_device_type": 2 00:19:08.731 } 00:19:08.731 ], 00:19:08.731 "driver_specific": { 00:19:08.731 "raid": { 00:19:08.731 "uuid": "da4c35ea-2e0d-46f3-b5e5-2106a1416413", 00:19:08.731 "strip_size_kb": 0, 00:19:08.731 "state": "online", 00:19:08.731 "raid_level": "raid1", 00:19:08.731 "superblock": true, 00:19:08.731 "num_base_bdevs": 2, 00:19:08.731 "num_base_bdevs_discovered": 2, 00:19:08.731 "num_base_bdevs_operational": 2, 00:19:08.731 "base_bdevs_list": [ 00:19:08.731 { 00:19:08.731 "name": "BaseBdev1", 00:19:08.731 "uuid": "af286fb4-9eb8-4f21-a6a6-a0aa275834d0", 00:19:08.731 "is_configured": true, 00:19:08.731 "data_offset": 256, 00:19:08.731 "data_size": 7936 00:19:08.731 }, 00:19:08.731 { 00:19:08.731 "name": "BaseBdev2", 00:19:08.731 "uuid": "b9e1f6f8-62cf-4b36-bb65-c873c746d71a", 00:19:08.731 "is_configured": true, 00:19:08.731 "data_offset": 256, 00:19:08.731 "data_size": 7936 00:19:08.731 } 00:19:08.731 ] 00:19:08.731 } 00:19:08.731 } 00:19:08.731 }' 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:08.731 BaseBdev2' 00:19:08.731 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.991 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:08.991 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:08.991 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:08.991 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.991 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.991 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.991 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.991 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:08.991 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:08.992 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.992 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:08.992 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.992 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.992 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.992 18:16:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:08.992 
18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.992 [2024-12-06 18:16:21.020533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.992 18:16:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.992 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.252 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.252 "name": "Existed_Raid", 00:19:09.252 "uuid": "da4c35ea-2e0d-46f3-b5e5-2106a1416413", 00:19:09.252 "strip_size_kb": 0, 00:19:09.252 "state": "online", 00:19:09.252 "raid_level": "raid1", 00:19:09.252 "superblock": true, 00:19:09.252 "num_base_bdevs": 2, 00:19:09.252 "num_base_bdevs_discovered": 1, 00:19:09.252 "num_base_bdevs_operational": 1, 00:19:09.252 "base_bdevs_list": [ 00:19:09.252 { 00:19:09.252 "name": null, 00:19:09.252 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:09.252 "is_configured": false, 00:19:09.252 "data_offset": 0, 00:19:09.252 "data_size": 7936 00:19:09.252 }, 00:19:09.252 { 00:19:09.252 "name": "BaseBdev2", 00:19:09.252 "uuid": "b9e1f6f8-62cf-4b36-bb65-c873c746d71a", 00:19:09.252 "is_configured": true, 00:19:09.252 "data_offset": 256, 00:19:09.252 "data_size": 7936 00:19:09.252 } 00:19:09.252 ] 00:19:09.252 }' 00:19:09.252 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.252 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:09.512 18:16:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.512 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.512 [2024-12-06 18:16:21.627198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:09.512 [2024-12-06 18:16:21.627379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.772 [2024-12-06 18:16:21.723400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.772 [2024-12-06 18:16:21.723451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.772 [2024-12-06 18:16:21.723463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89050 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89050 ']' 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89050 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89050 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89050' 00:19:09.772 killing process with pid 89050 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89050 00:19:09.772 [2024-12-06 18:16:21.808756] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.772 18:16:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89050 00:19:09.772 [2024-12-06 18:16:21.824979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.155 
18:16:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:11.155 00:19:11.155 real 0m4.942s 00:19:11.155 user 0m7.116s 00:19:11.155 sys 0m0.803s 00:19:11.155 ************************************ 00:19:11.155 END TEST raid_state_function_test_sb_md_interleaved 00:19:11.155 ************************************ 00:19:11.155 18:16:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.155 18:16:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.155 18:16:22 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:11.155 18:16:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:11.155 18:16:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.155 18:16:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.155 ************************************ 00:19:11.155 START TEST raid_superblock_test_md_interleaved 00:19:11.155 ************************************ 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89301 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89301 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89301 ']' 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.155 18:16:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.155 [2024-12-06 18:16:23.064477] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:19:11.155 [2024-12-06 18:16:23.064972] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89301 ] 00:19:11.155 [2024-12-06 18:16:23.217666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.415 [2024-12-06 18:16:23.325133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.415 [2024-12-06 18:16:23.515918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.415 [2024-12-06 18:16:23.516057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:11.984 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.985 malloc1 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.985 [2024-12-06 18:16:23.966271] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:11.985 [2024-12-06 18:16:23.966386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.985 [2024-12-06 18:16:23.966428] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:11.985 [2024-12-06 18:16:23.966456] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.985 
[2024-12-06 18:16:23.968403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.985 [2024-12-06 18:16:23.968474] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:11.985 pt1 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.985 18:16:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.985 malloc2 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.985 [2024-12-06 18:16:24.024224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:11.985 [2024-12-06 18:16:24.024320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.985 [2024-12-06 18:16:24.024360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:11.985 [2024-12-06 18:16:24.024387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.985 [2024-12-06 18:16:24.026467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.985 [2024-12-06 18:16:24.026559] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:11.985 pt2 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.985 [2024-12-06 18:16:24.036237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:11.985 [2024-12-06 18:16:24.037945] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:11.985 [2024-12-06 18:16:24.038133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:11.985 [2024-12-06 18:16:24.038147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:11.985 [2024-12-06 18:16:24.038230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:11.985 [2024-12-06 18:16:24.038294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:11.985 [2024-12-06 18:16:24.038305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:11.985 [2024-12-06 18:16:24.038369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.985 
18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.985 "name": "raid_bdev1", 00:19:11.985 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:11.985 "strip_size_kb": 0, 00:19:11.985 "state": "online", 00:19:11.985 "raid_level": "raid1", 00:19:11.985 "superblock": true, 00:19:11.985 "num_base_bdevs": 2, 00:19:11.985 "num_base_bdevs_discovered": 2, 00:19:11.985 "num_base_bdevs_operational": 2, 00:19:11.985 "base_bdevs_list": [ 00:19:11.985 { 00:19:11.985 "name": "pt1", 00:19:11.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.985 "is_configured": true, 00:19:11.985 "data_offset": 256, 00:19:11.985 "data_size": 7936 00:19:11.985 }, 00:19:11.985 { 00:19:11.985 "name": "pt2", 00:19:11.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.985 "is_configured": true, 00:19:11.985 "data_offset": 256, 00:19:11.985 "data_size": 7936 00:19:11.985 } 00:19:11.985 ] 00:19:11.985 }' 00:19:11.985 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.985 18:16:24 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.555 [2024-12-06 18:16:24.455855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.555 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:12.555 "name": "raid_bdev1", 00:19:12.555 "aliases": [ 00:19:12.555 "59046d82-2d9d-45d5-b835-c857db4d9b6f" 00:19:12.555 ], 00:19:12.555 "product_name": "Raid Volume", 00:19:12.555 "block_size": 4128, 00:19:12.555 "num_blocks": 7936, 00:19:12.555 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:12.555 "md_size": 32, 
00:19:12.555 "md_interleave": true, 00:19:12.555 "dif_type": 0, 00:19:12.555 "assigned_rate_limits": { 00:19:12.555 "rw_ios_per_sec": 0, 00:19:12.555 "rw_mbytes_per_sec": 0, 00:19:12.555 "r_mbytes_per_sec": 0, 00:19:12.555 "w_mbytes_per_sec": 0 00:19:12.555 }, 00:19:12.555 "claimed": false, 00:19:12.555 "zoned": false, 00:19:12.555 "supported_io_types": { 00:19:12.555 "read": true, 00:19:12.555 "write": true, 00:19:12.555 "unmap": false, 00:19:12.555 "flush": false, 00:19:12.555 "reset": true, 00:19:12.555 "nvme_admin": false, 00:19:12.555 "nvme_io": false, 00:19:12.555 "nvme_io_md": false, 00:19:12.555 "write_zeroes": true, 00:19:12.555 "zcopy": false, 00:19:12.555 "get_zone_info": false, 00:19:12.555 "zone_management": false, 00:19:12.555 "zone_append": false, 00:19:12.555 "compare": false, 00:19:12.555 "compare_and_write": false, 00:19:12.555 "abort": false, 00:19:12.555 "seek_hole": false, 00:19:12.555 "seek_data": false, 00:19:12.555 "copy": false, 00:19:12.555 "nvme_iov_md": false 00:19:12.555 }, 00:19:12.555 "memory_domains": [ 00:19:12.555 { 00:19:12.555 "dma_device_id": "system", 00:19:12.555 "dma_device_type": 1 00:19:12.555 }, 00:19:12.555 { 00:19:12.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.555 "dma_device_type": 2 00:19:12.555 }, 00:19:12.555 { 00:19:12.555 "dma_device_id": "system", 00:19:12.555 "dma_device_type": 1 00:19:12.555 }, 00:19:12.555 { 00:19:12.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.555 "dma_device_type": 2 00:19:12.555 } 00:19:12.555 ], 00:19:12.555 "driver_specific": { 00:19:12.555 "raid": { 00:19:12.555 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:12.555 "strip_size_kb": 0, 00:19:12.555 "state": "online", 00:19:12.555 "raid_level": "raid1", 00:19:12.555 "superblock": true, 00:19:12.555 "num_base_bdevs": 2, 00:19:12.555 "num_base_bdevs_discovered": 2, 00:19:12.555 "num_base_bdevs_operational": 2, 00:19:12.555 "base_bdevs_list": [ 00:19:12.555 { 00:19:12.555 "name": "pt1", 00:19:12.555 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:12.555 "is_configured": true, 00:19:12.555 "data_offset": 256, 00:19:12.555 "data_size": 7936 00:19:12.555 }, 00:19:12.555 { 00:19:12.555 "name": "pt2", 00:19:12.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:12.555 "is_configured": true, 00:19:12.555 "data_offset": 256, 00:19:12.556 "data_size": 7936 00:19:12.556 } 00:19:12.556 ] 00:19:12.556 } 00:19:12.556 } 00:19:12.556 }' 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:12.556 pt2' 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:12.556 18:16:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.556 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.556 [2024-12-06 18:16:24.703384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=59046d82-2d9d-45d5-b835-c857db4d9b6f 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 59046d82-2d9d-45d5-b835-c857db4d9b6f ']' 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.816 [2024-12-06 18:16:24.751011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.816 [2024-12-06 18:16:24.751085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.816 [2024-12-06 18:16:24.751192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.816 [2024-12-06 18:16:24.751270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.816 [2024-12-06 18:16:24.751316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.816 18:16:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:12.816 18:16:24 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:12.816 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.817 [2024-12-06 18:16:24.886802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:12.817 [2024-12-06 18:16:24.888748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:12.817 [2024-12-06 18:16:24.888896] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:19:12.817 [2024-12-06 18:16:24.888957] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:12.817 [2024-12-06 18:16:24.888972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.817 [2024-12-06 18:16:24.888983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:12.817 request: 00:19:12.817 { 00:19:12.817 "name": "raid_bdev1", 00:19:12.817 "raid_level": "raid1", 00:19:12.817 "base_bdevs": [ 00:19:12.817 "malloc1", 00:19:12.817 "malloc2" 00:19:12.817 ], 00:19:12.817 "superblock": false, 00:19:12.817 "method": "bdev_raid_create", 00:19:12.817 "req_id": 1 00:19:12.817 } 00:19:12.817 Got JSON-RPC error response 00:19:12.817 response: 00:19:12.817 { 00:19:12.817 "code": -17, 00:19:12.817 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:12.817 } 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.817 18:16:24 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.817 [2024-12-06 18:16:24.938688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:12.817 [2024-12-06 18:16:24.938781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.817 [2024-12-06 18:16:24.938844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:12.817 [2024-12-06 18:16:24.938881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.817 [2024-12-06 18:16:24.940979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.817 [2024-12-06 18:16:24.941049] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:12.817 [2024-12-06 18:16:24.941150] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:12.817 [2024-12-06 18:16:24.941233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:12.817 pt1 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.817 18:16:24 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.817 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.076 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.076 
"name": "raid_bdev1", 00:19:13.076 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:13.076 "strip_size_kb": 0, 00:19:13.076 "state": "configuring", 00:19:13.076 "raid_level": "raid1", 00:19:13.076 "superblock": true, 00:19:13.076 "num_base_bdevs": 2, 00:19:13.076 "num_base_bdevs_discovered": 1, 00:19:13.076 "num_base_bdevs_operational": 2, 00:19:13.076 "base_bdevs_list": [ 00:19:13.076 { 00:19:13.076 "name": "pt1", 00:19:13.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:13.076 "is_configured": true, 00:19:13.076 "data_offset": 256, 00:19:13.076 "data_size": 7936 00:19:13.076 }, 00:19:13.076 { 00:19:13.076 "name": null, 00:19:13.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:13.076 "is_configured": false, 00:19:13.076 "data_offset": 256, 00:19:13.076 "data_size": 7936 00:19:13.076 } 00:19:13.076 ] 00:19:13.076 }' 00:19:13.076 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.076 18:16:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.335 [2024-12-06 18:16:25.358022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:13.335 [2024-12-06 18:16:25.358107] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.335 [2024-12-06 18:16:25.358130] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:13.335 [2024-12-06 18:16:25.358141] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.335 [2024-12-06 18:16:25.358323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.335 [2024-12-06 18:16:25.358339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:13.335 [2024-12-06 18:16:25.358392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:13.335 [2024-12-06 18:16:25.358414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:13.335 [2024-12-06 18:16:25.358496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:13.335 [2024-12-06 18:16:25.358506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:13.335 [2024-12-06 18:16:25.358575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:13.335 [2024-12-06 18:16:25.358640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:13.335 [2024-12-06 18:16:25.358648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:13.335 [2024-12-06 18:16:25.358712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.335 pt2 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:13.335 18:16:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.335 "name": 
"raid_bdev1", 00:19:13.335 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:13.335 "strip_size_kb": 0, 00:19:13.335 "state": "online", 00:19:13.335 "raid_level": "raid1", 00:19:13.335 "superblock": true, 00:19:13.335 "num_base_bdevs": 2, 00:19:13.335 "num_base_bdevs_discovered": 2, 00:19:13.335 "num_base_bdevs_operational": 2, 00:19:13.335 "base_bdevs_list": [ 00:19:13.335 { 00:19:13.335 "name": "pt1", 00:19:13.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:13.335 "is_configured": true, 00:19:13.335 "data_offset": 256, 00:19:13.335 "data_size": 7936 00:19:13.335 }, 00:19:13.335 { 00:19:13.335 "name": "pt2", 00:19:13.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:13.335 "is_configured": true, 00:19:13.335 "data_offset": 256, 00:19:13.335 "data_size": 7936 00:19:13.335 } 00:19:13.335 ] 00:19:13.335 }' 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.335 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:13.903 18:16:25 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:13.903 [2024-12-06 18:16:25.817486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.903 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:13.904 "name": "raid_bdev1", 00:19:13.904 "aliases": [ 00:19:13.904 "59046d82-2d9d-45d5-b835-c857db4d9b6f" 00:19:13.904 ], 00:19:13.904 "product_name": "Raid Volume", 00:19:13.904 "block_size": 4128, 00:19:13.904 "num_blocks": 7936, 00:19:13.904 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:13.904 "md_size": 32, 00:19:13.904 "md_interleave": true, 00:19:13.904 "dif_type": 0, 00:19:13.904 "assigned_rate_limits": { 00:19:13.904 "rw_ios_per_sec": 0, 00:19:13.904 "rw_mbytes_per_sec": 0, 00:19:13.904 "r_mbytes_per_sec": 0, 00:19:13.904 "w_mbytes_per_sec": 0 00:19:13.904 }, 00:19:13.904 "claimed": false, 00:19:13.904 "zoned": false, 00:19:13.904 "supported_io_types": { 00:19:13.904 "read": true, 00:19:13.904 "write": true, 00:19:13.904 "unmap": false, 00:19:13.904 "flush": false, 00:19:13.904 "reset": true, 00:19:13.904 "nvme_admin": false, 00:19:13.904 "nvme_io": false, 00:19:13.904 "nvme_io_md": false, 00:19:13.904 "write_zeroes": true, 00:19:13.904 "zcopy": false, 00:19:13.904 "get_zone_info": false, 00:19:13.904 "zone_management": false, 00:19:13.904 "zone_append": false, 00:19:13.904 "compare": false, 00:19:13.904 "compare_and_write": false, 00:19:13.904 "abort": false, 00:19:13.904 "seek_hole": false, 00:19:13.904 "seek_data": false, 00:19:13.904 "copy": false, 00:19:13.904 "nvme_iov_md": 
false 00:19:13.904 }, 00:19:13.904 "memory_domains": [ 00:19:13.904 { 00:19:13.904 "dma_device_id": "system", 00:19:13.904 "dma_device_type": 1 00:19:13.904 }, 00:19:13.904 { 00:19:13.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.904 "dma_device_type": 2 00:19:13.904 }, 00:19:13.904 { 00:19:13.904 "dma_device_id": "system", 00:19:13.904 "dma_device_type": 1 00:19:13.904 }, 00:19:13.904 { 00:19:13.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.904 "dma_device_type": 2 00:19:13.904 } 00:19:13.904 ], 00:19:13.904 "driver_specific": { 00:19:13.904 "raid": { 00:19:13.904 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:13.904 "strip_size_kb": 0, 00:19:13.904 "state": "online", 00:19:13.904 "raid_level": "raid1", 00:19:13.904 "superblock": true, 00:19:13.904 "num_base_bdevs": 2, 00:19:13.904 "num_base_bdevs_discovered": 2, 00:19:13.904 "num_base_bdevs_operational": 2, 00:19:13.904 "base_bdevs_list": [ 00:19:13.904 { 00:19:13.904 "name": "pt1", 00:19:13.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:13.904 "is_configured": true, 00:19:13.904 "data_offset": 256, 00:19:13.904 "data_size": 7936 00:19:13.904 }, 00:19:13.904 { 00:19:13.904 "name": "pt2", 00:19:13.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:13.904 "is_configured": true, 00:19:13.904 "data_offset": 256, 00:19:13.904 "data_size": 7936 00:19:13.904 } 00:19:13.904 ] 00:19:13.904 } 00:19:13.904 } 00:19:13.904 }' 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:13.904 pt2' 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.904 18:16:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.904 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:13.904 [2024-12-06 18:16:26.065101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 59046d82-2d9d-45d5-b835-c857db4d9b6f '!=' 59046d82-2d9d-45d5-b835-c857db4d9b6f ']' 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.201 [2024-12-06 18:16:26.108779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:14.201 "name": "raid_bdev1", 00:19:14.201 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:14.201 "strip_size_kb": 0, 00:19:14.201 "state": "online", 00:19:14.201 "raid_level": "raid1", 00:19:14.201 "superblock": true, 00:19:14.201 "num_base_bdevs": 2, 00:19:14.201 "num_base_bdevs_discovered": 1, 00:19:14.201 "num_base_bdevs_operational": 1, 00:19:14.201 "base_bdevs_list": [ 00:19:14.201 { 00:19:14.201 "name": null, 00:19:14.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.201 "is_configured": false, 00:19:14.201 "data_offset": 0, 00:19:14.201 "data_size": 7936 00:19:14.201 }, 00:19:14.201 { 00:19:14.201 "name": "pt2", 00:19:14.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:14.201 "is_configured": true, 00:19:14.201 "data_offset": 256, 00:19:14.201 "data_size": 7936 00:19:14.201 } 00:19:14.201 ] 00:19:14.201 }' 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.201 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.460 [2024-12-06 18:16:26.536000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.460 [2024-12-06 18:16:26.536030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.460 [2024-12-06 18:16:26.536126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.460 [2024-12-06 18:16:26.536179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:14.460 [2024-12-06 18:16:26.536191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:14.460 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.461 [2024-12-06 18:16:26.599874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:14.461 [2024-12-06 18:16:26.599964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.461 [2024-12-06 18:16:26.600014] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:14.461 [2024-12-06 18:16:26.600047] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.461 [2024-12-06 18:16:26.602037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.461 [2024-12-06 18:16:26.602117] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:14.461 [2024-12-06 18:16:26.602193] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:14.461 [2024-12-06 18:16:26.602258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:14.461 [2024-12-06 18:16:26.602343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:14.461 [2024-12-06 18:16:26.602397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:19:14.461 [2024-12-06 18:16:26.602504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:14.461 [2024-12-06 18:16:26.602609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:14.461 [2024-12-06 18:16:26.602648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:14.461 [2024-12-06 18:16:26.602746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.461 pt2 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.461 18:16:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.461 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.720 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.720 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.720 "name": "raid_bdev1", 00:19:14.720 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:14.720 "strip_size_kb": 0, 00:19:14.720 "state": "online", 00:19:14.720 "raid_level": "raid1", 00:19:14.720 "superblock": true, 00:19:14.720 "num_base_bdevs": 2, 00:19:14.720 "num_base_bdevs_discovered": 1, 00:19:14.720 "num_base_bdevs_operational": 1, 00:19:14.720 "base_bdevs_list": [ 00:19:14.720 { 00:19:14.720 "name": null, 00:19:14.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.720 "is_configured": false, 00:19:14.720 "data_offset": 256, 00:19:14.720 "data_size": 7936 00:19:14.720 }, 00:19:14.720 { 00:19:14.720 "name": "pt2", 00:19:14.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:14.720 "is_configured": true, 00:19:14.720 "data_offset": 256, 00:19:14.720 "data_size": 7936 00:19:14.720 } 00:19:14.720 ] 00:19:14.720 }' 00:19:14.720 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.720 18:16:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:14.979 18:16:27 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.979 [2024-12-06 18:16:27.035312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.979 [2024-12-06 18:16:27.035341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.979 [2024-12-06 18:16:27.035413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.979 [2024-12-06 18:16:27.035467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:14.979 [2024-12-06 18:16:27.035475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.979 [2024-12-06 18:16:27.099207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:14.979 [2024-12-06 18:16:27.099304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.979 [2024-12-06 18:16:27.099341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:14.979 [2024-12-06 18:16:27.099369] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.979 [2024-12-06 18:16:27.101335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.979 [2024-12-06 18:16:27.101402] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:14.979 [2024-12-06 18:16:27.101476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:14.979 [2024-12-06 18:16:27.101540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:14.979 [2024-12-06 18:16:27.101674] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:14.979 [2024-12-06 18:16:27.101721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.979 [2024-12-06 18:16:27.101754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:14.979 [2024-12-06 18:16:27.101878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:14.979 [2024-12-06 18:16:27.101958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:19:14.979 [2024-12-06 18:16:27.101967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:14.979 [2024-12-06 18:16:27.102039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:14.979 [2024-12-06 18:16:27.102116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:14.979 [2024-12-06 18:16:27.102126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:14.979 [2024-12-06 18:16:27.102191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.979 pt1 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:14.979 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.980 18:16:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.980 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.239 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.239 "name": "raid_bdev1", 00:19:15.239 "uuid": "59046d82-2d9d-45d5-b835-c857db4d9b6f", 00:19:15.239 "strip_size_kb": 0, 00:19:15.239 "state": "online", 00:19:15.239 "raid_level": "raid1", 00:19:15.239 "superblock": true, 00:19:15.239 "num_base_bdevs": 2, 00:19:15.239 "num_base_bdevs_discovered": 1, 00:19:15.239 "num_base_bdevs_operational": 1, 00:19:15.239 "base_bdevs_list": [ 00:19:15.239 { 00:19:15.239 "name": null, 00:19:15.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.239 "is_configured": false, 00:19:15.239 "data_offset": 256, 00:19:15.239 "data_size": 7936 00:19:15.239 }, 00:19:15.239 { 00:19:15.239 "name": "pt2", 00:19:15.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:15.239 "is_configured": true, 00:19:15.239 "data_offset": 256, 00:19:15.239 "data_size": 7936 00:19:15.239 } 00:19:15.239 ] 00:19:15.239 }' 00:19:15.239 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.239 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.499 [2024-12-06 18:16:27.590613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 59046d82-2d9d-45d5-b835-c857db4d9b6f '!=' 59046d82-2d9d-45d5-b835-c857db4d9b6f ']' 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89301 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89301 ']' 00:19:15.499 18:16:27 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89301 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89301 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.499 killing process with pid 89301 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89301' 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89301 00:19:15.499 [2024-12-06 18:16:27.661372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.499 [2024-12-06 18:16:27.661469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.499 [2024-12-06 18:16:27.661518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.499 [2024-12-06 18:16:27.661533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:15.499 18:16:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89301 00:19:15.759 [2024-12-06 18:16:27.863884] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:17.141 18:16:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:17.141 00:19:17.141 real 0m5.972s 00:19:17.141 user 0m9.040s 00:19:17.141 sys 0m1.055s 00:19:17.141 
18:16:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.141 ************************************ 00:19:17.141 END TEST raid_superblock_test_md_interleaved 00:19:17.141 ************************************ 00:19:17.141 18:16:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.141 18:16:29 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:17.141 18:16:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:17.141 18:16:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.141 18:16:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.141 ************************************ 00:19:17.141 START TEST raid_rebuild_test_sb_md_interleaved 00:19:17.141 ************************************ 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89624 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89624 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89624 ']' 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.141 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.141 [2024-12-06 18:16:29.124416] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:19:17.141 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:17.141 Zero copy mechanism will not be used. 
00:19:17.141 [2024-12-06 18:16:29.124608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89624 ] 00:19:17.141 [2024-12-06 18:16:29.298368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.402 [2024-12-06 18:16:29.404707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.663 [2024-12-06 18:16:29.600277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.663 [2024-12-06 18:16:29.600314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.923 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.923 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:17.923 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:17.923 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:17.923 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.923 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.923 BaseBdev1_malloc 00:19:17.923 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.923 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:17.923 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.923 18:16:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.923 [2024-12-06 18:16:29.991433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:17.923 [2024-12-06 18:16:29.991492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.924 [2024-12-06 18:16:29.991533] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:17.924 [2024-12-06 18:16:29.991544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.924 [2024-12-06 18:16:29.993369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.924 [2024-12-06 18:16:29.993407] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:17.924 BaseBdev1 00:19:17.924 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.924 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:17.924 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:17.924 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.924 18:16:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.924 BaseBdev2_malloc 00:19:17.924 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.924 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:17.924 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.924 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.924 [2024-12-06 18:16:30.045818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:17.924 [2024-12-06 18:16:30.045914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.924 [2024-12-06 18:16:30.045954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:17.924 [2024-12-06 18:16:30.045966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.924 [2024-12-06 18:16:30.047753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.924 [2024-12-06 18:16:30.047791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:17.924 BaseBdev2 00:19:17.924 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.924 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:17.924 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.924 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.184 spare_malloc 00:19:18.184 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.184 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:18.184 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.184 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.184 spare_delay 00:19:18.184 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.184 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:18.184 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.184 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.184 [2024-12-06 18:16:30.140939] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.184 [2024-12-06 18:16:30.140997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.184 [2024-12-06 18:16:30.141018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:18.184 [2024-12-06 18:16:30.141029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.184 [2024-12-06 18:16:30.142824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.184 [2024-12-06 18:16:30.142902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.184 spare 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.185 [2024-12-06 18:16:30.152954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.185 [2024-12-06 18:16:30.154724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.185 [2024-12-06 
18:16:30.154927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:18.185 [2024-12-06 18:16:30.154942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:18.185 [2024-12-06 18:16:30.155007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:18.185 [2024-12-06 18:16:30.155083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:18.185 [2024-12-06 18:16:30.155092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:18.185 [2024-12-06 18:16:30.155170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.185 "name": "raid_bdev1", 00:19:18.185 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:18.185 "strip_size_kb": 0, 00:19:18.185 "state": "online", 00:19:18.185 "raid_level": "raid1", 00:19:18.185 "superblock": true, 00:19:18.185 "num_base_bdevs": 2, 00:19:18.185 "num_base_bdevs_discovered": 2, 00:19:18.185 "num_base_bdevs_operational": 2, 00:19:18.185 "base_bdevs_list": [ 00:19:18.185 { 00:19:18.185 "name": "BaseBdev1", 00:19:18.185 "uuid": "d7e8d1a4-3d62-540c-96ca-685de98e903e", 00:19:18.185 "is_configured": true, 00:19:18.185 "data_offset": 256, 00:19:18.185 "data_size": 7936 00:19:18.185 }, 00:19:18.185 { 00:19:18.185 "name": "BaseBdev2", 00:19:18.185 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:18.185 "is_configured": true, 00:19:18.185 "data_offset": 256, 00:19:18.185 "data_size": 7936 00:19:18.185 } 00:19:18.185 ] 00:19:18.185 }' 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.185 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.756 18:16:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.756 [2024-12-06 18:16:30.652457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:18.756 18:16:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.756 [2024-12-06 18:16:30.723982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.756 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.757 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.757 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.757 18:16:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.757 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.757 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.757 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.757 "name": "raid_bdev1", 00:19:18.757 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:18.757 "strip_size_kb": 0, 00:19:18.757 "state": "online", 00:19:18.757 "raid_level": "raid1", 00:19:18.757 "superblock": true, 00:19:18.757 "num_base_bdevs": 2, 00:19:18.757 "num_base_bdevs_discovered": 1, 00:19:18.757 "num_base_bdevs_operational": 1, 00:19:18.757 "base_bdevs_list": [ 00:19:18.757 { 00:19:18.757 "name": null, 00:19:18.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.757 "is_configured": false, 00:19:18.757 "data_offset": 0, 00:19:18.757 "data_size": 7936 00:19:18.757 }, 00:19:18.757 { 00:19:18.757 "name": "BaseBdev2", 00:19:18.757 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:18.757 "is_configured": true, 00:19:18.757 "data_offset": 256, 00:19:18.757 "data_size": 7936 00:19:18.757 } 00:19:18.757 ] 00:19:18.757 }' 00:19:18.757 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.757 18:16:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.017 18:16:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.017 18:16:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.017 18:16:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.017 [2024-12-06 18:16:31.147363] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.017 [2024-12-06 18:16:31.163095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:19.017 18:16:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.017 18:16:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:19.017 [2024-12-06 18:16:31.164999] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.400 "name": "raid_bdev1", 00:19:20.400 
"uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:20.400 "strip_size_kb": 0, 00:19:20.400 "state": "online", 00:19:20.400 "raid_level": "raid1", 00:19:20.400 "superblock": true, 00:19:20.400 "num_base_bdevs": 2, 00:19:20.400 "num_base_bdevs_discovered": 2, 00:19:20.400 "num_base_bdevs_operational": 2, 00:19:20.400 "process": { 00:19:20.400 "type": "rebuild", 00:19:20.400 "target": "spare", 00:19:20.400 "progress": { 00:19:20.400 "blocks": 2560, 00:19:20.400 "percent": 32 00:19:20.400 } 00:19:20.400 }, 00:19:20.400 "base_bdevs_list": [ 00:19:20.400 { 00:19:20.400 "name": "spare", 00:19:20.400 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:20.400 "is_configured": true, 00:19:20.400 "data_offset": 256, 00:19:20.400 "data_size": 7936 00:19:20.400 }, 00:19:20.400 { 00:19:20.400 "name": "BaseBdev2", 00:19:20.400 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:20.400 "is_configured": true, 00:19:20.400 "data_offset": 256, 00:19:20.400 "data_size": 7936 00:19:20.400 } 00:19:20.400 ] 00:19:20.400 }' 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.400 [2024-12-06 18:16:32.329016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:20.400 [2024-12-06 18:16:32.370058] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.400 [2024-12-06 18:16:32.370144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.400 [2024-12-06 18:16:32.370159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.400 [2024-12-06 18:16:32.370171] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.400 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.400 "name": "raid_bdev1", 00:19:20.400 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:20.400 "strip_size_kb": 0, 00:19:20.400 "state": "online", 00:19:20.400 "raid_level": "raid1", 00:19:20.400 "superblock": true, 00:19:20.400 "num_base_bdevs": 2, 00:19:20.400 "num_base_bdevs_discovered": 1, 00:19:20.400 "num_base_bdevs_operational": 1, 00:19:20.400 "base_bdevs_list": [ 00:19:20.400 { 00:19:20.400 "name": null, 00:19:20.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.400 "is_configured": false, 00:19:20.400 "data_offset": 0, 00:19:20.400 "data_size": 7936 00:19:20.400 }, 00:19:20.400 { 00:19:20.400 "name": "BaseBdev2", 00:19:20.400 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:20.400 "is_configured": true, 00:19:20.400 "data_offset": 256, 00:19:20.400 "data_size": 7936 00:19:20.401 } 00:19:20.401 ] 00:19:20.401 }' 00:19:20.401 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.401 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.969 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.969 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.970 "name": "raid_bdev1", 00:19:20.970 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:20.970 "strip_size_kb": 0, 00:19:20.970 "state": "online", 00:19:20.970 "raid_level": "raid1", 00:19:20.970 "superblock": true, 00:19:20.970 "num_base_bdevs": 2, 00:19:20.970 "num_base_bdevs_discovered": 1, 00:19:20.970 "num_base_bdevs_operational": 1, 00:19:20.970 "base_bdevs_list": [ 00:19:20.970 { 00:19:20.970 "name": null, 00:19:20.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.970 "is_configured": false, 00:19:20.970 "data_offset": 0, 00:19:20.970 "data_size": 7936 00:19:20.970 }, 00:19:20.970 { 00:19:20.970 "name": "BaseBdev2", 00:19:20.970 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:20.970 "is_configured": true, 00:19:20.970 "data_offset": 256, 00:19:20.970 "data_size": 7936 00:19:20.970 } 00:19:20.970 ] 00:19:20.970 }' 
00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.970 18:16:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.970 [2024-12-06 18:16:32.984434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.970 [2024-12-06 18:16:33.000705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:20.970 18:16:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.970 18:16:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:20.970 [2024-12-06 18:16:33.002505] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.907 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.907 "name": "raid_bdev1", 00:19:21.907 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:21.907 "strip_size_kb": 0, 00:19:21.907 "state": "online", 00:19:21.907 "raid_level": "raid1", 00:19:21.907 "superblock": true, 00:19:21.907 "num_base_bdevs": 2, 00:19:21.907 "num_base_bdevs_discovered": 2, 00:19:21.907 "num_base_bdevs_operational": 2, 00:19:21.907 "process": { 00:19:21.907 "type": "rebuild", 00:19:21.907 "target": "spare", 00:19:21.907 "progress": { 00:19:21.907 "blocks": 2560, 00:19:21.907 "percent": 32 00:19:21.907 } 00:19:21.907 }, 00:19:21.907 "base_bdevs_list": [ 00:19:21.907 { 00:19:21.907 "name": "spare", 00:19:21.907 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:21.907 "is_configured": true, 00:19:21.907 "data_offset": 256, 00:19:21.907 "data_size": 7936 00:19:21.907 }, 00:19:21.907 { 00:19:21.907 "name": "BaseBdev2", 00:19:21.907 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:21.907 "is_configured": true, 00:19:21.907 "data_offset": 256, 00:19:21.907 "data_size": 7936 00:19:21.907 } 00:19:21.907 ] 00:19:21.907 }' 00:19:21.907 18:16:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:22.166 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=768 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:22.166 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.167 18:16:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.167 "name": "raid_bdev1", 00:19:22.167 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:22.167 "strip_size_kb": 0, 00:19:22.167 "state": "online", 00:19:22.167 "raid_level": "raid1", 00:19:22.167 "superblock": true, 00:19:22.167 "num_base_bdevs": 2, 00:19:22.167 "num_base_bdevs_discovered": 2, 00:19:22.167 "num_base_bdevs_operational": 2, 00:19:22.167 "process": { 00:19:22.167 "type": "rebuild", 00:19:22.167 "target": "spare", 00:19:22.167 "progress": { 00:19:22.167 "blocks": 2816, 00:19:22.167 "percent": 35 00:19:22.167 } 00:19:22.167 }, 00:19:22.167 "base_bdevs_list": [ 00:19:22.167 { 00:19:22.167 "name": "spare", 00:19:22.167 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:22.167 "is_configured": true, 00:19:22.167 "data_offset": 256, 00:19:22.167 "data_size": 7936 00:19:22.167 }, 00:19:22.167 { 00:19:22.167 "name": "BaseBdev2", 00:19:22.167 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:22.167 "is_configured": true, 00:19:22.167 "data_offset": 256, 00:19:22.167 "data_size": 7936 00:19:22.167 } 00:19:22.167 ] 00:19:22.167 }' 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.167 18:16:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.104 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.363 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.363 18:16:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.363 "name": "raid_bdev1", 00:19:23.363 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:23.363 "strip_size_kb": 0, 00:19:23.363 "state": "online", 00:19:23.363 "raid_level": "raid1", 00:19:23.363 "superblock": true, 00:19:23.363 "num_base_bdevs": 2, 00:19:23.363 "num_base_bdevs_discovered": 2, 00:19:23.363 "num_base_bdevs_operational": 2, 00:19:23.363 "process": { 00:19:23.363 "type": "rebuild", 00:19:23.363 "target": "spare", 00:19:23.363 "progress": { 00:19:23.363 "blocks": 5632, 00:19:23.363 "percent": 70 00:19:23.363 } 00:19:23.363 }, 00:19:23.363 "base_bdevs_list": [ 00:19:23.363 { 00:19:23.363 "name": "spare", 00:19:23.363 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:23.363 "is_configured": true, 00:19:23.363 "data_offset": 256, 00:19:23.363 "data_size": 7936 00:19:23.363 }, 00:19:23.363 { 00:19:23.363 "name": "BaseBdev2", 00:19:23.363 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:23.363 "is_configured": true, 00:19:23.363 "data_offset": 256, 00:19:23.363 "data_size": 7936 00:19:23.363 } 00:19:23.363 ] 00:19:23.363 }' 00:19:23.363 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.363 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:23.363 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.363 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.363 18:16:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:24.304 [2024-12-06 18:16:36.115342] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:24.304 [2024-12-06 18:16:36.115416] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:24.304 [2024-12-06 18:16:36.115532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.304 "name": "raid_bdev1", 00:19:24.304 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:24.304 "strip_size_kb": 0, 00:19:24.304 "state": "online", 00:19:24.304 "raid_level": "raid1", 00:19:24.304 "superblock": true, 00:19:24.304 "num_base_bdevs": 2, 00:19:24.304 
"num_base_bdevs_discovered": 2, 00:19:24.304 "num_base_bdevs_operational": 2, 00:19:24.304 "base_bdevs_list": [ 00:19:24.304 { 00:19:24.304 "name": "spare", 00:19:24.304 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:24.304 "is_configured": true, 00:19:24.304 "data_offset": 256, 00:19:24.304 "data_size": 7936 00:19:24.304 }, 00:19:24.304 { 00:19:24.304 "name": "BaseBdev2", 00:19:24.304 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:24.304 "is_configured": true, 00:19:24.304 "data_offset": 256, 00:19:24.304 "data_size": 7936 00:19:24.304 } 00:19:24.304 ] 00:19:24.304 }' 00:19:24.304 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.592 18:16:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.592 "name": "raid_bdev1", 00:19:24.592 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:24.592 "strip_size_kb": 0, 00:19:24.592 "state": "online", 00:19:24.592 "raid_level": "raid1", 00:19:24.592 "superblock": true, 00:19:24.592 "num_base_bdevs": 2, 00:19:24.592 "num_base_bdevs_discovered": 2, 00:19:24.592 "num_base_bdevs_operational": 2, 00:19:24.592 "base_bdevs_list": [ 00:19:24.592 { 00:19:24.592 "name": "spare", 00:19:24.592 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:24.592 "is_configured": true, 00:19:24.592 "data_offset": 256, 00:19:24.592 "data_size": 7936 00:19:24.592 }, 00:19:24.592 { 00:19:24.592 "name": "BaseBdev2", 00:19:24.592 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:24.592 "is_configured": true, 00:19:24.592 "data_offset": 256, 00:19:24.592 "data_size": 7936 00:19:24.592 } 00:19:24.592 ] 00:19:24.592 }' 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.592 18:16:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.592 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.593 "name": 
"raid_bdev1", 00:19:24.593 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:24.593 "strip_size_kb": 0, 00:19:24.593 "state": "online", 00:19:24.593 "raid_level": "raid1", 00:19:24.593 "superblock": true, 00:19:24.593 "num_base_bdevs": 2, 00:19:24.593 "num_base_bdevs_discovered": 2, 00:19:24.593 "num_base_bdevs_operational": 2, 00:19:24.593 "base_bdevs_list": [ 00:19:24.593 { 00:19:24.593 "name": "spare", 00:19:24.593 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:24.593 "is_configured": true, 00:19:24.593 "data_offset": 256, 00:19:24.593 "data_size": 7936 00:19:24.593 }, 00:19:24.593 { 00:19:24.593 "name": "BaseBdev2", 00:19:24.593 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:24.593 "is_configured": true, 00:19:24.593 "data_offset": 256, 00:19:24.593 "data_size": 7936 00:19:24.593 } 00:19:24.593 ] 00:19:24.593 }' 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.593 18:16:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.169 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:25.169 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.169 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.169 [2024-12-06 18:16:37.053404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.169 [2024-12-06 18:16:37.053503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.169 [2024-12-06 18:16:37.053623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.169 [2024-12-06 18:16:37.053715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.169 [2024-12-06 
18:16:37.053772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:25.169 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.170 18:16:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.170 [2024-12-06 18:16:37.129231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:25.170 [2024-12-06 18:16:37.129284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.170 [2024-12-06 18:16:37.129307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:25.170 [2024-12-06 18:16:37.129316] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.170 [2024-12-06 18:16:37.131269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.170 [2024-12-06 18:16:37.131304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:25.170 [2024-12-06 18:16:37.131363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:25.170 [2024-12-06 18:16:37.131413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.170 [2024-12-06 18:16:37.131531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:25.170 spare 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.170 [2024-12-06 18:16:37.231431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:25.170 [2024-12-06 18:16:37.231458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:25.170 [2024-12-06 18:16:37.231559] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:25.170 [2024-12-06 18:16:37.231643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:25.170 [2024-12-06 18:16:37.231653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:25.170 [2024-12-06 18:16:37.231732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.170 18:16:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.170 "name": "raid_bdev1", 00:19:25.170 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:25.170 "strip_size_kb": 0, 00:19:25.170 "state": "online", 00:19:25.170 "raid_level": "raid1", 00:19:25.170 "superblock": true, 00:19:25.170 "num_base_bdevs": 2, 00:19:25.170 "num_base_bdevs_discovered": 2, 00:19:25.170 "num_base_bdevs_operational": 2, 00:19:25.170 "base_bdevs_list": [ 00:19:25.170 { 00:19:25.170 "name": "spare", 00:19:25.170 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:25.170 "is_configured": true, 00:19:25.170 "data_offset": 256, 00:19:25.170 "data_size": 7936 00:19:25.170 }, 00:19:25.170 { 00:19:25.170 "name": "BaseBdev2", 00:19:25.170 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:25.170 "is_configured": true, 00:19:25.170 "data_offset": 256, 00:19:25.170 "data_size": 7936 00:19:25.170 } 00:19:25.170 ] 00:19:25.170 }' 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.170 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.740 18:16:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.740 "name": "raid_bdev1", 00:19:25.740 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:25.740 "strip_size_kb": 0, 00:19:25.740 "state": "online", 00:19:25.740 "raid_level": "raid1", 00:19:25.740 "superblock": true, 00:19:25.740 "num_base_bdevs": 2, 00:19:25.740 "num_base_bdevs_discovered": 2, 00:19:25.740 "num_base_bdevs_operational": 2, 00:19:25.740 "base_bdevs_list": [ 00:19:25.740 { 00:19:25.740 "name": "spare", 00:19:25.740 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:25.740 "is_configured": true, 00:19:25.740 "data_offset": 256, 00:19:25.740 "data_size": 7936 00:19:25.740 }, 00:19:25.740 { 00:19:25.740 "name": "BaseBdev2", 00:19:25.740 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:25.740 "is_configured": true, 00:19:25.740 "data_offset": 256, 00:19:25.740 "data_size": 7936 00:19:25.740 } 00:19:25.740 ] 00:19:25.740 }' 00:19:25.740 18:16:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.740 [2024-12-06 18:16:37.864109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.740 18:16:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.740 "name": "raid_bdev1", 00:19:25.740 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:25.740 "strip_size_kb": 0, 00:19:25.740 "state": "online", 00:19:25.740 
"raid_level": "raid1", 00:19:25.740 "superblock": true, 00:19:25.740 "num_base_bdevs": 2, 00:19:25.740 "num_base_bdevs_discovered": 1, 00:19:25.740 "num_base_bdevs_operational": 1, 00:19:25.740 "base_bdevs_list": [ 00:19:25.740 { 00:19:25.740 "name": null, 00:19:25.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.740 "is_configured": false, 00:19:25.740 "data_offset": 0, 00:19:25.740 "data_size": 7936 00:19:25.740 }, 00:19:25.740 { 00:19:25.740 "name": "BaseBdev2", 00:19:25.740 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:25.740 "is_configured": true, 00:19:25.740 "data_offset": 256, 00:19:25.740 "data_size": 7936 00:19:25.740 } 00:19:25.740 ] 00:19:25.740 }' 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.740 18:16:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.308 18:16:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:26.308 18:16:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.308 18:16:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.308 [2024-12-06 18:16:38.271431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.308 [2024-12-06 18:16:38.271646] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:26.308 [2024-12-06 18:16:38.271664] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:26.308 [2024-12-06 18:16:38.271705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.308 [2024-12-06 18:16:38.287387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:26.308 18:16:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.308 18:16:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:26.309 [2024-12-06 18:16:38.289281] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:27.248 "name": "raid_bdev1", 00:19:27.248 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:27.248 "strip_size_kb": 0, 00:19:27.248 "state": "online", 00:19:27.248 "raid_level": "raid1", 00:19:27.248 "superblock": true, 00:19:27.248 "num_base_bdevs": 2, 00:19:27.248 "num_base_bdevs_discovered": 2, 00:19:27.248 "num_base_bdevs_operational": 2, 00:19:27.248 "process": { 00:19:27.248 "type": "rebuild", 00:19:27.248 "target": "spare", 00:19:27.248 "progress": { 00:19:27.248 "blocks": 2560, 00:19:27.248 "percent": 32 00:19:27.248 } 00:19:27.248 }, 00:19:27.248 "base_bdevs_list": [ 00:19:27.248 { 00:19:27.248 "name": "spare", 00:19:27.248 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:27.248 "is_configured": true, 00:19:27.248 "data_offset": 256, 00:19:27.248 "data_size": 7936 00:19:27.248 }, 00:19:27.248 { 00:19:27.248 "name": "BaseBdev2", 00:19:27.248 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:27.248 "is_configured": true, 00:19:27.248 "data_offset": 256, 00:19:27.248 "data_size": 7936 00:19:27.248 } 00:19:27.248 ] 00:19:27.248 }' 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.248 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.508 [2024-12-06 18:16:39.417318] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.508 [2024-12-06 18:16:39.494194] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:27.508 [2024-12-06 18:16:39.494317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.508 [2024-12-06 18:16:39.494351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.508 [2024-12-06 18:16:39.494374] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.508 18:16:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.508 "name": "raid_bdev1", 00:19:27.508 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:27.508 "strip_size_kb": 0, 00:19:27.508 "state": "online", 00:19:27.508 "raid_level": "raid1", 00:19:27.508 "superblock": true, 00:19:27.508 "num_base_bdevs": 2, 00:19:27.508 "num_base_bdevs_discovered": 1, 00:19:27.508 "num_base_bdevs_operational": 1, 00:19:27.508 "base_bdevs_list": [ 00:19:27.508 { 00:19:27.508 "name": null, 00:19:27.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.508 "is_configured": false, 00:19:27.508 "data_offset": 0, 00:19:27.508 "data_size": 7936 00:19:27.508 }, 00:19:27.508 { 00:19:27.508 "name": "BaseBdev2", 00:19:27.508 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:27.508 "is_configured": true, 00:19:27.508 "data_offset": 256, 00:19:27.508 "data_size": 7936 00:19:27.508 } 00:19:27.508 ] 00:19:27.508 }' 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.508 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.768 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:27.769 18:16:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.769 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.769 [2024-12-06 18:16:39.907735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:27.769 [2024-12-06 18:16:39.907854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.769 [2024-12-06 18:16:39.907921] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:27.769 [2024-12-06 18:16:39.907968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.769 [2024-12-06 18:16:39.908229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.769 [2024-12-06 18:16:39.908286] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:27.769 [2024-12-06 18:16:39.908382] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:27.769 [2024-12-06 18:16:39.908425] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:27.769 [2024-12-06 18:16:39.908460] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:27.769 [2024-12-06 18:16:39.908488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:27.769 [2024-12-06 18:16:39.924993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:27.769 spare 00:19:27.769 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.769 18:16:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:27.769 [2024-12-06 18:16:39.926886] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:29.152 "name": "raid_bdev1", 00:19:29.152 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:29.152 "strip_size_kb": 0, 00:19:29.152 "state": "online", 00:19:29.152 "raid_level": "raid1", 00:19:29.152 "superblock": true, 00:19:29.152 "num_base_bdevs": 2, 00:19:29.152 "num_base_bdevs_discovered": 2, 00:19:29.152 "num_base_bdevs_operational": 2, 00:19:29.152 "process": { 00:19:29.152 "type": "rebuild", 00:19:29.152 "target": "spare", 00:19:29.152 "progress": { 00:19:29.152 "blocks": 2560, 00:19:29.152 "percent": 32 00:19:29.152 } 00:19:29.152 }, 00:19:29.152 "base_bdevs_list": [ 00:19:29.152 { 00:19:29.152 "name": "spare", 00:19:29.152 "uuid": "319df2a7-0d5c-51d7-9d61-0513f254c8cc", 00:19:29.152 "is_configured": true, 00:19:29.152 "data_offset": 256, 00:19:29.152 "data_size": 7936 00:19:29.152 }, 00:19:29.152 { 00:19:29.152 "name": "BaseBdev2", 00:19:29.152 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:29.152 "is_configured": true, 00:19:29.152 "data_offset": 256, 00:19:29.152 "data_size": 7936 00:19:29.152 } 00:19:29.152 ] 00:19:29.152 }' 00:19:29.152 18:16:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.152 [2024-12-06 
18:16:41.046234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:29.152 [2024-12-06 18:16:41.132338] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:29.152 [2024-12-06 18:16:41.132461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.152 [2024-12-06 18:16:41.132500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:29.152 [2024-12-06 18:16:41.132522] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.152 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.152 18:16:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.153 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.153 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.153 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.153 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.153 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.153 "name": "raid_bdev1", 00:19:29.153 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:29.153 "strip_size_kb": 0, 00:19:29.153 "state": "online", 00:19:29.153 "raid_level": "raid1", 00:19:29.153 "superblock": true, 00:19:29.153 "num_base_bdevs": 2, 00:19:29.153 "num_base_bdevs_discovered": 1, 00:19:29.153 "num_base_bdevs_operational": 1, 00:19:29.153 "base_bdevs_list": [ 00:19:29.153 { 00:19:29.153 "name": null, 00:19:29.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.153 "is_configured": false, 00:19:29.153 "data_offset": 0, 00:19:29.153 "data_size": 7936 00:19:29.153 }, 00:19:29.153 { 00:19:29.153 "name": "BaseBdev2", 00:19:29.153 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:29.153 "is_configured": true, 00:19:29.153 "data_offset": 256, 00:19:29.153 "data_size": 7936 00:19:29.153 } 00:19:29.153 ] 00:19:29.153 }' 00:19:29.153 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.153 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.722 18:16:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.722 "name": "raid_bdev1", 00:19:29.722 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:29.722 "strip_size_kb": 0, 00:19:29.722 "state": "online", 00:19:29.722 "raid_level": "raid1", 00:19:29.722 "superblock": true, 00:19:29.722 "num_base_bdevs": 2, 00:19:29.722 "num_base_bdevs_discovered": 1, 00:19:29.722 "num_base_bdevs_operational": 1, 00:19:29.722 "base_bdevs_list": [ 00:19:29.722 { 00:19:29.722 "name": null, 00:19:29.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.722 "is_configured": false, 00:19:29.722 "data_offset": 0, 00:19:29.722 "data_size": 7936 00:19:29.722 }, 00:19:29.722 { 00:19:29.722 "name": "BaseBdev2", 00:19:29.722 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:29.722 "is_configured": true, 00:19:29.722 "data_offset": 256, 
00:19:29.722 "data_size": 7936 00:19:29.722 } 00:19:29.722 ] 00:19:29.722 }' 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.722 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.723 [2024-12-06 18:16:41.766210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:29.723 [2024-12-06 18:16:41.766318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.723 [2024-12-06 18:16:41.766363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:29.723 [2024-12-06 18:16:41.766373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.723 [2024-12-06 18:16:41.766566] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.723 [2024-12-06 18:16:41.766580] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:29.723 [2024-12-06 18:16:41.766634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:29.723 [2024-12-06 18:16:41.766648] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:29.723 [2024-12-06 18:16:41.766657] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:29.723 [2024-12-06 18:16:41.766668] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:29.723 BaseBdev1 00:19:29.723 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.723 18:16:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.660 18:16:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.660 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.920 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.920 "name": "raid_bdev1", 00:19:30.920 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:30.920 "strip_size_kb": 0, 00:19:30.920 "state": "online", 00:19:30.920 "raid_level": "raid1", 00:19:30.920 "superblock": true, 00:19:30.920 "num_base_bdevs": 2, 00:19:30.920 "num_base_bdevs_discovered": 1, 00:19:30.920 "num_base_bdevs_operational": 1, 00:19:30.920 "base_bdevs_list": [ 00:19:30.920 { 00:19:30.920 "name": null, 00:19:30.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.920 "is_configured": false, 00:19:30.920 "data_offset": 0, 00:19:30.920 "data_size": 7936 00:19:30.920 }, 00:19:30.920 { 00:19:30.920 "name": "BaseBdev2", 00:19:30.920 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:30.920 "is_configured": true, 00:19:30.920 "data_offset": 256, 00:19:30.920 "data_size": 7936 00:19:30.920 } 00:19:30.920 ] 00:19:30.920 }' 00:19:30.920 18:16:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.920 18:16:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.179 "name": "raid_bdev1", 00:19:31.179 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:31.179 "strip_size_kb": 0, 00:19:31.179 "state": "online", 00:19:31.179 "raid_level": "raid1", 00:19:31.179 "superblock": true, 00:19:31.179 "num_base_bdevs": 2, 00:19:31.179 "num_base_bdevs_discovered": 1, 00:19:31.179 "num_base_bdevs_operational": 1, 00:19:31.179 "base_bdevs_list": [ 00:19:31.179 { 00:19:31.179 "name": 
null, 00:19:31.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.179 "is_configured": false, 00:19:31.179 "data_offset": 0, 00:19:31.179 "data_size": 7936 00:19:31.179 }, 00:19:31.179 { 00:19:31.179 "name": "BaseBdev2", 00:19:31.179 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:31.179 "is_configured": true, 00:19:31.179 "data_offset": 256, 00:19:31.179 "data_size": 7936 00:19:31.179 } 00:19:31.179 ] 00:19:31.179 }' 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.179 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.179 [2024-12-06 18:16:43.335756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.179 [2024-12-06 18:16:43.336014] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:31.179 [2024-12-06 18:16:43.336108] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:31.179 request: 00:19:31.179 { 00:19:31.179 "base_bdev": "BaseBdev1", 00:19:31.179 "raid_bdev": "raid_bdev1", 00:19:31.180 "method": "bdev_raid_add_base_bdev", 00:19:31.180 "req_id": 1 00:19:31.180 } 00:19:31.180 Got JSON-RPC error response 00:19:31.180 response: 00:19:31.180 { 00:19:31.180 "code": -22, 00:19:31.180 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:31.180 } 00:19:31.180 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:31.180 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:31.180 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.180 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.180 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.180 18:16:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:32.563 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:32.563 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.563 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.563 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.563 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.563 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:32.563 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.563 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.563 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.564 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.564 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.564 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.564 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.564 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.564 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.564 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.564 "name": "raid_bdev1", 00:19:32.564 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:32.564 "strip_size_kb": 0, 
00:19:32.564 "state": "online", 00:19:32.564 "raid_level": "raid1", 00:19:32.564 "superblock": true, 00:19:32.564 "num_base_bdevs": 2, 00:19:32.564 "num_base_bdevs_discovered": 1, 00:19:32.564 "num_base_bdevs_operational": 1, 00:19:32.564 "base_bdevs_list": [ 00:19:32.564 { 00:19:32.564 "name": null, 00:19:32.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.564 "is_configured": false, 00:19:32.564 "data_offset": 0, 00:19:32.564 "data_size": 7936 00:19:32.564 }, 00:19:32.564 { 00:19:32.564 "name": "BaseBdev2", 00:19:32.564 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:32.564 "is_configured": true, 00:19:32.564 "data_offset": 256, 00:19:32.564 "data_size": 7936 00:19:32.564 } 00:19:32.564 ] 00:19:32.564 }' 00:19:32.564 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.564 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.826 
18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.826 "name": "raid_bdev1", 00:19:32.826 "uuid": "0a01d41e-0698-47bd-aba6-8d8f18a18d3e", 00:19:32.826 "strip_size_kb": 0, 00:19:32.826 "state": "online", 00:19:32.826 "raid_level": "raid1", 00:19:32.826 "superblock": true, 00:19:32.826 "num_base_bdevs": 2, 00:19:32.826 "num_base_bdevs_discovered": 1, 00:19:32.826 "num_base_bdevs_operational": 1, 00:19:32.826 "base_bdevs_list": [ 00:19:32.826 { 00:19:32.826 "name": null, 00:19:32.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.826 "is_configured": false, 00:19:32.826 "data_offset": 0, 00:19:32.826 "data_size": 7936 00:19:32.826 }, 00:19:32.826 { 00:19:32.826 "name": "BaseBdev2", 00:19:32.826 "uuid": "6cea5da8-bdb7-5100-87c2-11eb2f671d6f", 00:19:32.826 "is_configured": true, 00:19:32.826 "data_offset": 256, 00:19:32.826 "data_size": 7936 00:19:32.826 } 00:19:32.826 ] 00:19:32.826 }' 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89624 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89624 ']' 00:19:32.826 18:16:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89624 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89624 00:19:32.826 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.826 killing process with pid 89624 00:19:32.826 Received shutdown signal, test time was about 60.000000 seconds 00:19:32.826 00:19:32.827 Latency(us) 00:19:32.827 [2024-12-06T18:16:44.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.827 [2024-12-06T18:16:44.995Z] =================================================================================================================== 00:19:32.827 [2024-12-06T18:16:44.995Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.827 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.827 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89624' 00:19:32.827 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89624 00:19:32.827 [2024-12-06 18:16:44.918557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:32.827 [2024-12-06 18:16:44.918688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.827 18:16:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89624 00:19:32.827 [2024-12-06 18:16:44.918739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:32.827 [2024-12-06 18:16:44.918752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:33.083 [2024-12-06 18:16:45.206397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.462 18:16:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:34.462 00:19:34.462 real 0m17.272s 00:19:34.462 user 0m22.525s 00:19:34.462 sys 0m1.539s 00:19:34.462 18:16:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.462 ************************************ 00:19:34.462 END TEST raid_rebuild_test_sb_md_interleaved 00:19:34.462 ************************************ 00:19:34.462 18:16:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.462 18:16:46 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:34.462 18:16:46 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:34.462 18:16:46 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89624 ']' 00:19:34.462 18:16:46 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89624 00:19:34.462 18:16:46 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:34.462 00:19:34.462 real 12m29.818s 00:19:34.462 user 16m55.732s 00:19:34.462 sys 1m53.024s 00:19:34.462 ************************************ 00:19:34.462 END TEST bdev_raid 00:19:34.462 ************************************ 00:19:34.462 18:16:46 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.462 18:16:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.462 18:16:46 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:34.462 18:16:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:34.462 18:16:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.462 18:16:46 -- common/autotest_common.sh@10 -- # set +x 00:19:34.462 
************************************ 00:19:34.462 START TEST spdkcli_raid 00:19:34.462 ************************************ 00:19:34.462 18:16:46 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:34.462 * Looking for test storage... 00:19:34.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:34.462 18:16:46 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:34.462 18:16:46 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:19:34.462 18:16:46 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:34.723 18:16:46 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.723 18:16:46 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:34.724 18:16:46 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:34.724 18:16:46 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.724 18:16:46 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:34.724 18:16:46 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.724 18:16:46 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.724 18:16:46 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.724 18:16:46 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.724 --rc genhtml_branch_coverage=1 00:19:34.724 --rc genhtml_function_coverage=1 00:19:34.724 --rc genhtml_legend=1 00:19:34.724 --rc geninfo_all_blocks=1 00:19:34.724 --rc geninfo_unexecuted_blocks=1 00:19:34.724 00:19:34.724 ' 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.724 --rc genhtml_branch_coverage=1 00:19:34.724 --rc genhtml_function_coverage=1 00:19:34.724 --rc genhtml_legend=1 00:19:34.724 --rc geninfo_all_blocks=1 00:19:34.724 --rc geninfo_unexecuted_blocks=1 00:19:34.724 00:19:34.724 ' 00:19:34.724 
18:16:46 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.724 --rc genhtml_branch_coverage=1 00:19:34.724 --rc genhtml_function_coverage=1 00:19:34.724 --rc genhtml_legend=1 00:19:34.724 --rc geninfo_all_blocks=1 00:19:34.724 --rc geninfo_unexecuted_blocks=1 00:19:34.724 00:19:34.724 ' 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:34.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.724 --rc genhtml_branch_coverage=1 00:19:34.724 --rc genhtml_function_coverage=1 00:19:34.724 --rc genhtml_legend=1 00:19:34.724 --rc geninfo_all_blocks=1 00:19:34.724 --rc geninfo_unexecuted_blocks=1 00:19:34.724 00:19:34.724 ' 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:34.724 18:16:46 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90296 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:34.724 18:16:46 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90296 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90296 ']' 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.724 18:16:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.724 [2024-12-06 18:16:46.797100] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:19:34.724 [2024-12-06 18:16:46.797300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90296 ] 00:19:34.984 [2024-12-06 18:16:46.974693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:34.984 [2024-12-06 18:16:47.092891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.984 [2024-12-06 18:16:47.092932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.924 18:16:47 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.924 18:16:47 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:35.924 18:16:47 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:35.924 18:16:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.924 18:16:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:35.924 18:16:48 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:35.924 18:16:48 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:35.924 18:16:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:35.924 18:16:48 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:35.924 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:35.924 ' 00:19:37.836 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:37.836 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:37.836 18:16:49 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:37.836 18:16:49 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.836 18:16:49 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.836 18:16:49 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:37.836 18:16:49 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.836 18:16:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.836 18:16:49 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:37.836 ' 00:19:38.776 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:38.776 18:16:50 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:38.776 18:16:50 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:38.776 18:16:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.776 18:16:50 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:38.776 18:16:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.776 18:16:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.776 18:16:50 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:38.776 18:16:50 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:39.345 18:16:51 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:39.605 18:16:51 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:39.605 18:16:51 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:39.605 18:16:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.605 18:16:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.605 18:16:51 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:39.605 18:16:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.605 18:16:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.605 18:16:51 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:39.605 ' 00:19:40.542 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:40.542 18:16:52 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:40.542 18:16:52 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.542 18:16:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.798 18:16:52 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:40.798 18:16:52 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.798 18:16:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.798 18:16:52 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:40.798 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:40.798 ' 00:19:42.170 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:42.170 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:42.171 18:16:54 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:42.171 18:16:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.171 18:16:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.171 18:16:54 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90296 00:19:42.171 18:16:54 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90296 ']' 00:19:42.171 18:16:54 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90296 00:19:42.171 18:16:54 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:42.171 18:16:54 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.171 18:16:54 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90296 00:19:42.429 18:16:54 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.429 18:16:54 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.429 18:16:54 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90296' 00:19:42.429 killing process with pid 90296 00:19:42.429 18:16:54 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90296 00:19:42.429 18:16:54 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90296 00:19:44.964 18:16:56 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:44.964 18:16:56 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90296 ']' 00:19:44.964 18:16:56 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90296 00:19:44.964 18:16:56 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90296 ']' 00:19:44.964 18:16:56 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90296 00:19:44.964 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90296) - No such process 00:19:44.964 18:16:56 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90296 is not found' 00:19:44.964 Process with pid 90296 is not found 00:19:44.964 18:16:56 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:44.964 18:16:56 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:44.964 18:16:56 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:44.964 18:16:56 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:44.964 00:19:44.964 real 0m10.462s 00:19:44.964 user 0m21.612s 00:19:44.964 sys 
0m1.137s 00:19:44.964 18:16:56 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.964 18:16:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.964 ************************************ 00:19:44.964 END TEST spdkcli_raid 00:19:44.964 ************************************ 00:19:44.964 18:16:56 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:44.964 18:16:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.964 18:16:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.964 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:19:44.964 ************************************ 00:19:44.964 START TEST blockdev_raid5f 00:19:44.964 ************************************ 00:19:44.964 18:16:56 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:44.964 * Looking for test storage... 00:19:44.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:44.964 18:16:57 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:44.964 18:16:57 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:19:44.964 18:16:57 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:45.224 18:16:57 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.224 18:16:57 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:45.224 18:16:57 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.224 18:16:57 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:45.224 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.224 --rc genhtml_branch_coverage=1 00:19:45.224 --rc genhtml_function_coverage=1 00:19:45.224 --rc genhtml_legend=1 00:19:45.224 --rc geninfo_all_blocks=1 00:19:45.224 --rc geninfo_unexecuted_blocks=1 00:19:45.224 00:19:45.224 ' 00:19:45.224 18:16:57 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:45.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.224 --rc genhtml_branch_coverage=1 00:19:45.224 --rc genhtml_function_coverage=1 00:19:45.224 --rc genhtml_legend=1 00:19:45.224 --rc geninfo_all_blocks=1 00:19:45.224 --rc geninfo_unexecuted_blocks=1 00:19:45.224 00:19:45.224 ' 00:19:45.224 18:16:57 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:45.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.224 --rc genhtml_branch_coverage=1 00:19:45.224 --rc genhtml_function_coverage=1 00:19:45.224 --rc genhtml_legend=1 00:19:45.224 --rc geninfo_all_blocks=1 00:19:45.224 --rc geninfo_unexecuted_blocks=1 00:19:45.224 00:19:45.224 ' 00:19:45.224 18:16:57 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:45.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.224 --rc genhtml_branch_coverage=1 00:19:45.224 --rc genhtml_function_coverage=1 00:19:45.224 --rc genhtml_legend=1 00:19:45.224 --rc geninfo_all_blocks=1 00:19:45.224 --rc geninfo_unexecuted_blocks=1 00:19:45.224 00:19:45.224 ' 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:45.224 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90580 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:45.225 18:16:57 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90580 00:19:45.225 18:16:57 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90580 ']' 00:19:45.225 18:16:57 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.225 18:16:57 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.225 18:16:57 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.225 18:16:57 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.225 18:16:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.225 [2024-12-06 18:16:57.337800] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:19:45.225 [2024-12-06 18:16:57.337990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90580 ] 00:19:45.484 [2024-12-06 18:16:57.514015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.484 [2024-12-06 18:16:57.647919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:46.866 18:16:58 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.866 Malloc0 00:19:46.866 Malloc1 00:19:46.866 Malloc2 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:46.866 18:16:58 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9d09c4e0-a7b7-4464-830a-592976478697"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9d09c4e0-a7b7-4464-830a-592976478697",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9d09c4e0-a7b7-4464-830a-592976478697",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f9baab26-5197-433b-a4f4-dc3e53d8717a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"3c325e8f-ce17-475a-bd2a-0a04c6cfc9e2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d3d62fa6-1e9f-47bb-9bb8-21c81b6303a5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:46.866 18:16:58 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:46.866 18:16:59 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:19:46.866 18:16:59 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:46.866 18:16:59 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90580 00:19:46.866 18:16:59 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90580 ']' 00:19:46.866 18:16:59 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90580 00:19:46.866 18:16:59 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:46.866 18:16:59 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.866 18:16:59 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90580 00:19:47.127 killing process with pid 90580 00:19:47.127 18:16:59 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:47.127 18:16:59 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:47.127 18:16:59 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90580' 00:19:47.127 18:16:59 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90580 00:19:47.127 18:16:59 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90580 00:19:50.496 18:17:01 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:50.496 18:17:01 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:50.496 18:17:01 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:50.496 18:17:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.496 18:17:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:50.496 ************************************ 00:19:50.496 START TEST bdev_hello_world 00:19:50.496 ************************************ 00:19:50.496 18:17:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:50.496 [2024-12-06 18:17:02.038559] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:19:50.496 [2024-12-06 18:17:02.038774] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90654 ] 00:19:50.496 [2024-12-06 18:17:02.213122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.496 [2024-12-06 18:17:02.350654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.065 [2024-12-06 18:17:02.956896] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:51.065 [2024-12-06 18:17:02.957048] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:51.065 [2024-12-06 18:17:02.957121] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:51.065 [2024-12-06 18:17:02.957635] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:51.065 [2024-12-06 18:17:02.957808] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:51.065 [2024-12-06 18:17:02.957859] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:51.065 [2024-12-06 18:17:02.957927] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:51.065 00:19:51.065 [2024-12-06 18:17:02.957970] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:52.443 00:19:52.443 real 0m2.471s 00:19:52.443 user 0m2.019s 00:19:52.443 sys 0m0.329s 00:19:52.443 18:17:04 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.443 18:17:04 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:52.443 ************************************ 00:19:52.443 END TEST bdev_hello_world 00:19:52.443 ************************************ 00:19:52.443 18:17:04 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:52.443 18:17:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:52.443 18:17:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.443 18:17:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.443 ************************************ 00:19:52.443 START TEST bdev_bounds 00:19:52.443 ************************************ 00:19:52.443 18:17:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:52.443 18:17:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90696 00:19:52.443 18:17:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:52.443 Process bdevio pid: 90696 00:19:52.443 18:17:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:52.443 18:17:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90696' 00:19:52.443 18:17:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90696 00:19:52.443 18:17:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90696 ']' 00:19:52.443 18:17:04 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.444 18:17:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.444 18:17:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.444 18:17:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.444 18:17:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:52.444 [2024-12-06 18:17:04.574815] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:19:52.444 [2024-12-06 18:17:04.574948] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90696 ] 00:19:52.703 [2024-12-06 18:17:04.750303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:52.962 [2024-12-06 18:17:04.891005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.962 [2024-12-06 18:17:04.891195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.962 [2024-12-06 18:17:04.891206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.531 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.531 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:53.531 18:17:05 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:53.531 I/O targets: 00:19:53.531 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:53.531 00:19:53.531 
00:19:53.531 CUnit - A unit testing framework for C - Version 2.1-3 00:19:53.531 http://cunit.sourceforge.net/ 00:19:53.531 00:19:53.531 00:19:53.531 Suite: bdevio tests on: raid5f 00:19:53.531 Test: blockdev write read block ...passed 00:19:53.531 Test: blockdev write zeroes read block ...passed 00:19:53.531 Test: blockdev write zeroes read no split ...passed 00:19:53.790 Test: blockdev write zeroes read split ...passed 00:19:53.790 Test: blockdev write zeroes read split partial ...passed 00:19:53.790 Test: blockdev reset ...passed 00:19:53.790 Test: blockdev write read 8 blocks ...passed 00:19:53.790 Test: blockdev write read size > 128k ...passed 00:19:53.791 Test: blockdev write read invalid size ...passed 00:19:53.791 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:53.791 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:53.791 Test: blockdev write read max offset ...passed 00:19:53.791 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:53.791 Test: blockdev writev readv 8 blocks ...passed 00:19:53.791 Test: blockdev writev readv 30 x 1block ...passed 00:19:53.791 Test: blockdev writev readv block ...passed 00:19:53.791 Test: blockdev writev readv size > 128k ...passed 00:19:53.791 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:53.791 Test: blockdev comparev and writev ...passed 00:19:53.791 Test: blockdev nvme passthru rw ...passed 00:19:53.791 Test: blockdev nvme passthru vendor specific ...passed 00:19:53.791 Test: blockdev nvme admin passthru ...passed 00:19:53.791 Test: blockdev copy ...passed 00:19:53.791 00:19:53.791 Run Summary: Type Total Ran Passed Failed Inactive 00:19:53.791 suites 1 1 n/a 0 0 00:19:53.791 tests 23 23 23 0 0 00:19:53.791 asserts 130 130 130 0 n/a 00:19:53.791 00:19:53.791 Elapsed time = 0.641 seconds 00:19:53.791 0 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90696 00:19:53.791 
18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90696 ']' 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90696 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90696 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90696' 00:19:53.791 killing process with pid 90696 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90696 00:19:53.791 18:17:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90696 00:19:55.699 18:17:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:55.699 00:19:55.699 real 0m2.948s 00:19:55.699 user 0m7.244s 00:19:55.699 sys 0m0.475s 00:19:55.699 18:17:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.699 18:17:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:55.699 ************************************ 00:19:55.699 END TEST bdev_bounds 00:19:55.699 ************************************ 00:19:55.699 18:17:07 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:55.699 18:17:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:55.699 18:17:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.699 
18:17:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:55.699 ************************************ 00:19:55.699 START TEST bdev_nbd 00:19:55.699 ************************************ 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90756 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90756 /var/tmp/spdk-nbd.sock 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90756 ']' 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.699 18:17:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:55.699 [2024-12-06 18:17:07.602959] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:19:55.699 [2024-12-06 18:17:07.603085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.699 [2024-12-06 18:17:07.777592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.958 [2024-12-06 18:17:07.910456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:56.527 18:17:08 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.787 1+0 records in 00:19:56.787 1+0 records out 00:19:56.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397519 s, 10.3 MB/s 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:56.787 18:17:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:57.045 { 00:19:57.045 "nbd_device": "/dev/nbd0", 00:19:57.045 "bdev_name": "raid5f" 00:19:57.045 } 00:19:57.045 ]' 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:57.045 { 00:19:57.045 "nbd_device": "/dev/nbd0", 00:19:57.045 "bdev_name": "raid5f" 00:19:57.045 } 00:19:57.045 ]' 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.045 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:57.304 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:57.304 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:57.304 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:57.304 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:57.305 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:57.563 /dev/nbd0 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:57.563 18:17:09 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:57.563 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.822 1+0 records in 00:19:57.822 1+0 records out 00:19:57.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425732 s, 9.6 MB/s 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:57.822 { 00:19:57.822 "nbd_device": "/dev/nbd0", 00:19:57.822 "bdev_name": "raid5f" 00:19:57.822 } 00:19:57.822 ]' 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:57.822 { 00:19:57.822 "nbd_device": "/dev/nbd0", 00:19:57.822 "bdev_name": "raid5f" 00:19:57.822 } 00:19:57.822 ]' 00:19:57.822 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:58.081 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:58.081 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:58.081 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:58.081 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:58.081 18:17:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:58.081 256+0 records in 00:19:58.081 256+0 records out 00:19:58.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494074 s, 212 MB/s 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:58.081 256+0 records in 00:19:58.081 256+0 records out 00:19:58.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309279 s, 33.9 MB/s 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.081 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:58.340 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:58.600 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:58.600 malloc_lvol_verify 00:19:58.860 18:17:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:58.860 279d5bca-ad4a-4f25-8cb6-a9bd227a1e5f 00:19:58.860 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:59.119 2667d1ae-ff7a-4d44-ac2f-4fa0a8f2325f 00:19:59.119 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:59.379 /dev/nbd0 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:59.379 mke2fs 1.47.0 (5-Feb-2023) 00:19:59.379 Discarding device blocks: 0/4096 done 00:19:59.379 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:59.379 00:19:59.379 Allocating group tables: 0/1 done 00:19:59.379 Writing inode tables: 0/1 done 00:19:59.379 Creating journal (1024 blocks): done 00:19:59.379 Writing superblocks and filesystem accounting information: 0/1 done 00:19:59.379 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.379 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90756 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90756 ']' 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90756 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90756 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.641 killing process with pid 90756 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90756' 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90756 00:19:59.641 18:17:11 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90756 00:20:01.552 18:17:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:01.552 00:20:01.552 real 0m5.732s 00:20:01.552 user 0m7.630s 00:20:01.552 sys 0m1.306s 00:20:01.552 18:17:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.552 18:17:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 ************************************ 00:20:01.552 END TEST bdev_nbd 00:20:01.552 ************************************ 00:20:01.552 18:17:13 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:01.552 18:17:13 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:20:01.552 18:17:13 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:20:01.552 18:17:13 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:01.552 18:17:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:01.552 18:17:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.552 18:17:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 ************************************ 00:20:01.552 START TEST bdev_fio 00:20:01.552 ************************************ 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:01.552 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:01.552 ************************************ 00:20:01.552 START TEST bdev_fio_rw_verify 00:20:01.552 ************************************ 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.552 18:17:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:01.552 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:01.552 fio-3.35 00:20:01.552 Starting 1 thread 00:20:13.776 00:20:13.776 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90961: Fri Dec 6 18:17:24 2024 00:20:13.776 read: IOPS=11.6k, BW=45.1MiB/s (47.3MB/s)(451MiB/10001msec) 00:20:13.776 slat (nsec): min=17746, max=78472, avg=20731.84, stdev=2689.37 00:20:13.776 clat (usec): min=12, max=416, avg=140.06, stdev=49.64 00:20:13.776 lat (usec): min=31, max=453, avg=160.79, stdev=50.13 00:20:13.776 clat percentiles (usec): 00:20:13.776 | 50.000th=[ 139], 99.000th=[ 241], 99.900th=[ 277], 99.990th=[ 318], 00:20:13.776 | 99.999th=[ 375] 00:20:13.776 write: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(468MiB/9868msec); 0 zone resets 00:20:13.776 slat (usec): min=7, max=294, avg=17.06, stdev= 3.88 00:20:13.776 clat (usec): min=63, max=1148, avg=318.04, stdev=45.04 00:20:13.776 lat (usec): min=79, max=1443, avg=335.10, stdev=46.25 00:20:13.776 clat percentiles (usec): 00:20:13.776 | 50.000th=[ 322], 99.000th=[ 433], 99.900th=[ 594], 99.990th=[ 963], 00:20:13.776 | 99.999th=[ 1012] 00:20:13.776 bw ( KiB/s): min=44528, max=50776, per=98.82%, avg=48007.00, stdev=1995.30, samples=19 00:20:13.776 iops : min=11132, max=12694, avg=12001.74, stdev=498.82, samples=19 00:20:13.776 lat (usec) : 20=0.01%, 50=0.01%, 
100=12.50%, 250=39.33%, 500=48.08% 00:20:13.776 lat (usec) : 750=0.07%, 1000=0.02% 00:20:13.776 lat (msec) : 2=0.01% 00:20:13.776 cpu : usr=98.83%, sys=0.49%, ctx=25, majf=0, minf=9563 00:20:13.776 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.776 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.776 issued rwts: total=115531,119847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.776 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:13.776 00:20:13.776 Run status group 0 (all jobs): 00:20:13.776 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=451MiB (473MB), run=10001-10001msec 00:20:13.776 WRITE: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=468MiB (491MB), run=9868-9868msec 00:20:14.345 ----------------------------------------------------- 00:20:14.345 Suppressions used: 00:20:14.345 count bytes template 00:20:14.345 1 7 /usr/src/fio/parse.c 00:20:14.345 721 69216 /usr/src/fio/iolog.c 00:20:14.345 1 8 libtcmalloc_minimal.so 00:20:14.345 1 904 libcrypto.so 00:20:14.345 ----------------------------------------------------- 00:20:14.345 00:20:14.345 00:20:14.345 real 0m12.920s 00:20:14.345 user 0m13.076s 00:20:14.345 sys 0m0.725s 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.345 ************************************ 00:20:14.345 END TEST bdev_fio_rw_verify 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:14.345 ************************************ 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:14.345 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9d09c4e0-a7b7-4464-830a-592976478697"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9d09c4e0-a7b7-4464-830a-592976478697",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9d09c4e0-a7b7-4464-830a-592976478697",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f9baab26-5197-433b-a4f4-dc3e53d8717a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3c325e8f-ce17-475a-bd2a-0a04c6cfc9e2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d3d62fa6-1e9f-47bb-9bb8-21c81b6303a5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:14.346 /home/vagrant/spdk_repo/spdk 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:20:14.346 00:20:14.346 real 0m13.205s 00:20:14.346 user 0m13.193s 00:20:14.346 sys 0m0.862s 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.346 18:17:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:14.346 ************************************ 00:20:14.346 END TEST bdev_fio 00:20:14.346 ************************************ 00:20:14.605 18:17:26 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:14.605 18:17:26 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:14.605 18:17:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:14.605 18:17:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.605 18:17:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:14.605 ************************************ 00:20:14.605 START TEST bdev_verify 00:20:14.605 ************************************ 00:20:14.605 18:17:26 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:14.605 [2024-12-06 18:17:26.657480] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 
00:20:14.605 [2024-12-06 18:17:26.657578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91126 ] 00:20:14.864 [2024-12-06 18:17:26.831656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:14.864 [2024-12-06 18:17:26.962233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.864 [2024-12-06 18:17:26.962273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.432 Running I/O for 5 seconds... 00:20:17.750 10271.00 IOPS, 40.12 MiB/s [2024-12-06T18:17:30.857Z] 10333.50 IOPS, 40.37 MiB/s [2024-12-06T18:17:31.796Z] 10357.00 IOPS, 40.46 MiB/s [2024-12-06T18:17:32.738Z] 10391.00 IOPS, 40.59 MiB/s [2024-12-06T18:17:32.738Z] 10372.60 IOPS, 40.52 MiB/s 00:20:20.570 Latency(us) 00:20:20.570 [2024-12-06T18:17:32.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.570 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:20.570 Verification LBA range: start 0x0 length 0x2000 00:20:20.570 raid5f : 5.02 6305.62 24.63 0.00 0.00 30606.06 102.40 21635.47 00:20:20.570 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:20.570 Verification LBA range: start 0x2000 length 0x2000 00:20:20.570 raid5f : 5.01 4059.81 15.86 0.00 0.00 47452.21 1946.05 33426.22 00:20:20.570 [2024-12-06T18:17:32.738Z] =================================================================================================================== 00:20:20.570 [2024-12-06T18:17:32.738Z] Total : 10365.43 40.49 0.00 0.00 37199.96 102.40 33426.22 00:20:21.952 00:20:21.952 real 0m7.480s 00:20:21.952 user 0m13.775s 00:20:21.952 sys 0m0.347s 00:20:21.952 18:17:34 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.953 18:17:34 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:21.953 ************************************ 00:20:21.953 END TEST bdev_verify 00:20:21.953 ************************************ 00:20:21.953 18:17:34 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:21.953 18:17:34 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:21.953 18:17:34 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.953 18:17:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.212 ************************************ 00:20:22.212 START TEST bdev_verify_big_io 00:20:22.213 ************************************ 00:20:22.213 18:17:34 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:22.213 [2024-12-06 18:17:34.207025] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:20:22.213 [2024-12-06 18:17:34.207131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91219 ] 00:20:22.473 [2024-12-06 18:17:34.381867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:22.473 [2024-12-06 18:17:34.519094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.473 [2024-12-06 18:17:34.519151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.079 Running I/O for 5 seconds... 
00:20:25.401 633.00 IOPS, 39.56 MiB/s [2024-12-06T18:17:38.509Z] 728.50 IOPS, 45.53 MiB/s [2024-12-06T18:17:39.448Z] 739.67 IOPS, 46.23 MiB/s [2024-12-06T18:17:40.390Z] 729.75 IOPS, 45.61 MiB/s [2024-12-06T18:17:40.651Z] 761.20 IOPS, 47.58 MiB/s 00:20:28.483 Latency(us) 00:20:28.483 [2024-12-06T18:17:40.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.483 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:28.483 Verification LBA range: start 0x0 length 0x200 00:20:28.483 raid5f : 5.31 429.74 26.86 0.00 0.00 7467059.62 181.55 326020.14 00:20:28.483 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:28.483 Verification LBA range: start 0x200 length 0x200 00:20:28.483 raid5f : 5.37 330.99 20.69 0.00 0.00 9614199.25 139.51 424925.12 00:20:28.483 [2024-12-06T18:17:40.651Z] =================================================================================================================== 00:20:28.483 [2024-12-06T18:17:40.651Z] Total : 760.73 47.55 0.00 0.00 8406300.99 139.51 424925.12 00:20:29.863 00:20:29.863 real 0m7.737s 00:20:29.863 user 0m14.299s 00:20:29.863 sys 0m0.345s 00:20:29.863 18:17:41 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.863 18:17:41 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:29.863 ************************************ 00:20:29.863 END TEST bdev_verify_big_io 00:20:29.863 ************************************ 00:20:29.863 18:17:41 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:29.863 18:17:41 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:29.863 18:17:41 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.863 18:17:41 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:29.863 ************************************ 00:20:29.863 START TEST bdev_write_zeroes 00:20:29.863 ************************************ 00:20:29.863 18:17:41 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:29.863 [2024-12-06 18:17:42.005640] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:20:29.863 [2024-12-06 18:17:42.005737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91324 ] 00:20:30.123 [2024-12-06 18:17:42.177986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.123 [2024-12-06 18:17:42.284001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.693 Running I/O for 1 seconds... 
00:20:32.074 24831.00 IOPS, 97.00 MiB/s 00:20:32.074 Latency(us) 00:20:32.074 [2024-12-06T18:17:44.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.074 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:32.074 raid5f : 1.01 24809.42 96.91 0.00 0.00 5142.93 1488.15 17972.32 00:20:32.074 [2024-12-06T18:17:44.242Z] =================================================================================================================== 00:20:32.074 [2024-12-06T18:17:44.242Z] Total : 24809.42 96.91 0.00 0.00 5142.93 1488.15 17972.32 00:20:33.012 00:20:33.012 real 0m3.234s 00:20:33.012 user 0m2.879s 00:20:33.012 sys 0m0.226s 00:20:33.012 18:17:45 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.012 18:17:45 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:33.012 ************************************ 00:20:33.012 END TEST bdev_write_zeroes 00:20:33.012 ************************************ 00:20:33.271 18:17:45 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:33.271 18:17:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:33.271 18:17:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.271 18:17:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:33.271 ************************************ 00:20:33.271 START TEST bdev_json_nonenclosed 00:20:33.271 ************************************ 00:20:33.271 18:17:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:33.271 [2024-12-06 
18:17:45.310194] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:20:33.271 [2024-12-06 18:17:45.310294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91377 ] 00:20:33.530 [2024-12-06 18:17:45.483253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.530 [2024-12-06 18:17:45.592271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.530 [2024-12-06 18:17:45.592362] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:33.530 [2024-12-06 18:17:45.592392] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:33.530 [2024-12-06 18:17:45.592402] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:33.790 00:20:33.790 real 0m0.614s 00:20:33.790 user 0m0.390s 00:20:33.790 sys 0m0.120s 00:20:33.790 18:17:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.790 18:17:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:33.790 ************************************ 00:20:33.790 END TEST bdev_json_nonenclosed 00:20:33.790 ************************************ 00:20:33.790 18:17:45 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:33.790 18:17:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:33.790 18:17:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.791 18:17:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:33.791 
************************************ 00:20:33.791 START TEST bdev_json_nonarray 00:20:33.791 ************************************ 00:20:33.791 18:17:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:34.051 [2024-12-06 18:17:45.992655] Starting SPDK v25.01-pre git sha1 0ea9ac02f / DPDK 24.03.0 initialization... 00:20:34.051 [2024-12-06 18:17:45.992755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91401 ] 00:20:34.051 [2024-12-06 18:17:46.162218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.310 [2024-12-06 18:17:46.270024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.310 [2024-12-06 18:17:46.270131] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:34.310 [2024-12-06 18:17:46.270150] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:34.310 [2024-12-06 18:17:46.270169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:34.569 00:20:34.569 real 0m0.602s 00:20:34.569 user 0m0.376s 00:20:34.569 sys 0m0.121s 00:20:34.569 18:17:46 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.569 18:17:46 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:34.569 ************************************ 00:20:34.569 END TEST bdev_json_nonarray 00:20:34.569 ************************************ 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:34.570 18:17:46 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:34.570 00:20:34.570 real 0m49.599s 00:20:34.570 user 1m6.666s 00:20:34.570 sys 0m5.374s 00:20:34.570 18:17:46 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.570 18:17:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:34.570 
************************************ 00:20:34.570 END TEST blockdev_raid5f 00:20:34.570 ************************************ 00:20:34.570 18:17:46 -- spdk/autotest.sh@194 -- # uname -s 00:20:34.570 18:17:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:34.570 18:17:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:34.570 18:17:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:34.570 18:17:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:34.570 18:17:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.570 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:20:34.570 18:17:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:34.570 18:17:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:34.570 18:17:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:34.570 18:17:46 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:34.570 18:17:46 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:34.570 18:17:46 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:20:34.570 18:17:46 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:34.570 18:17:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.570 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:20:34.570 18:17:46 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:34.570 18:17:46 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:34.570 18:17:46 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:34.570 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:20:37.110 INFO: APP EXITING 00:20:37.110 INFO: killing all VMs 00:20:37.110 INFO: killing vhost app 00:20:37.110 INFO: EXIT DONE 00:20:37.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.371 Waiting for block devices as requested 00:20:37.371 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:37.371 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.312 Cleaning 00:20:38.312 Removing: /var/run/dpdk/spdk0/config 00:20:38.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:38.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:38.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:38.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:38.312 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:38.312 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:38.312 Removing: /dev/shm/spdk_tgt_trace.pid57113 00:20:38.312 Removing: /var/run/dpdk/spdk0 00:20:38.312 Removing: /var/run/dpdk/spdk_pid56851 00:20:38.312 Removing: /var/run/dpdk/spdk_pid57113 00:20:38.312 Removing: /var/run/dpdk/spdk_pid57343 00:20:38.312 Removing: /var/run/dpdk/spdk_pid57458 00:20:38.312 Removing: /var/run/dpdk/spdk_pid57514 00:20:38.312 Removing: /var/run/dpdk/spdk_pid57648 00:20:38.312 Removing: /var/run/dpdk/spdk_pid57671 
00:20:38.312 Removing: /var/run/dpdk/spdk_pid57882 00:20:38.312 Removing: /var/run/dpdk/spdk_pid58005 00:20:38.312 Removing: /var/run/dpdk/spdk_pid58117 00:20:38.572 Removing: /var/run/dpdk/spdk_pid58245 00:20:38.572 Removing: /var/run/dpdk/spdk_pid58364 00:20:38.572 Removing: /var/run/dpdk/spdk_pid58409 00:20:38.572 Removing: /var/run/dpdk/spdk_pid58440 00:20:38.572 Removing: /var/run/dpdk/spdk_pid58516 00:20:38.572 Removing: /var/run/dpdk/spdk_pid58633 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59125 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59200 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59280 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59302 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59461 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59483 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59642 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59658 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59733 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59751 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59826 00:20:38.572 Removing: /var/run/dpdk/spdk_pid59844 00:20:38.572 Removing: /var/run/dpdk/spdk_pid60056 00:20:38.572 Removing: /var/run/dpdk/spdk_pid60092 00:20:38.572 Removing: /var/run/dpdk/spdk_pid60176 00:20:38.572 Removing: /var/run/dpdk/spdk_pid61582 00:20:38.572 Removing: /var/run/dpdk/spdk_pid61799 00:20:38.572 Removing: /var/run/dpdk/spdk_pid61945 00:20:38.572 Removing: /var/run/dpdk/spdk_pid62599 00:20:38.572 Removing: /var/run/dpdk/spdk_pid62816 00:20:38.572 Removing: /var/run/dpdk/spdk_pid62956 00:20:38.572 Removing: /var/run/dpdk/spdk_pid63605 00:20:38.572 Removing: /var/run/dpdk/spdk_pid63942 00:20:38.572 Removing: /var/run/dpdk/spdk_pid64087 00:20:38.572 Removing: /var/run/dpdk/spdk_pid65489 00:20:38.572 Removing: /var/run/dpdk/spdk_pid65748 00:20:38.572 Removing: /var/run/dpdk/spdk_pid65892 00:20:38.572 Removing: /var/run/dpdk/spdk_pid67290 00:20:38.572 Removing: /var/run/dpdk/spdk_pid67543 00:20:38.572 Removing: /var/run/dpdk/spdk_pid67688 
00:20:38.572 Removing: /var/run/dpdk/spdk_pid69090 00:20:38.572 Removing: /var/run/dpdk/spdk_pid69537 00:20:38.572 Removing: /var/run/dpdk/spdk_pid69688 00:20:38.572 Removing: /var/run/dpdk/spdk_pid71192 00:20:38.572 Removing: /var/run/dpdk/spdk_pid71464 00:20:38.572 Removing: /var/run/dpdk/spdk_pid71614 00:20:38.572 Removing: /var/run/dpdk/spdk_pid73107 00:20:38.572 Removing: /var/run/dpdk/spdk_pid73372 00:20:38.572 Removing: /var/run/dpdk/spdk_pid73527 00:20:38.572 Removing: /var/run/dpdk/spdk_pid75018 00:20:38.572 Removing: /var/run/dpdk/spdk_pid75511 00:20:38.572 Removing: /var/run/dpdk/spdk_pid75662 00:20:38.572 Removing: /var/run/dpdk/spdk_pid75807 00:20:38.572 Removing: /var/run/dpdk/spdk_pid76235 00:20:38.572 Removing: /var/run/dpdk/spdk_pid76976 00:20:38.572 Removing: /var/run/dpdk/spdk_pid77359 00:20:38.572 Removing: /var/run/dpdk/spdk_pid78048 00:20:38.572 Removing: /var/run/dpdk/spdk_pid78523 00:20:38.572 Removing: /var/run/dpdk/spdk_pid79289 00:20:38.572 Removing: /var/run/dpdk/spdk_pid79698 00:20:38.572 Removing: /var/run/dpdk/spdk_pid81680 00:20:38.572 Removing: /var/run/dpdk/spdk_pid82132 00:20:38.832 Removing: /var/run/dpdk/spdk_pid82581 00:20:38.832 Removing: /var/run/dpdk/spdk_pid84690 00:20:38.832 Removing: /var/run/dpdk/spdk_pid85171 00:20:38.832 Removing: /var/run/dpdk/spdk_pid85693 00:20:38.832 Removing: /var/run/dpdk/spdk_pid86762 00:20:38.832 Removing: /var/run/dpdk/spdk_pid87091 00:20:38.832 Removing: /var/run/dpdk/spdk_pid88035 00:20:38.832 Removing: /var/run/dpdk/spdk_pid88360 00:20:38.832 Removing: /var/run/dpdk/spdk_pid89301 00:20:38.832 Removing: /var/run/dpdk/spdk_pid89624 00:20:38.832 Removing: /var/run/dpdk/spdk_pid90296 00:20:38.832 Removing: /var/run/dpdk/spdk_pid90580 00:20:38.832 Removing: /var/run/dpdk/spdk_pid90654 00:20:38.832 Removing: /var/run/dpdk/spdk_pid90696 00:20:38.832 Removing: /var/run/dpdk/spdk_pid90946 00:20:38.832 Removing: /var/run/dpdk/spdk_pid91126 00:20:38.832 Removing: /var/run/dpdk/spdk_pid91219 
00:20:38.832 Removing: /var/run/dpdk/spdk_pid91324 00:20:38.832 Removing: /var/run/dpdk/spdk_pid91377 00:20:38.832 Removing: /var/run/dpdk/spdk_pid91401 00:20:38.832 Clean 00:20:38.832 18:17:50 -- common/autotest_common.sh@1453 -- # return 0 00:20:38.832 18:17:50 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:38.832 18:17:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.832 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:20:38.832 18:17:50 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:38.832 18:17:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.832 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:20:38.832 18:17:50 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:39.092 18:17:51 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:39.092 18:17:51 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:39.092 18:17:51 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:39.092 18:17:51 -- spdk/autotest.sh@398 -- # hostname 00:20:39.092 18:17:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:39.092 geninfo: WARNING: invalid characters removed from testname! 
00:21:01.058 18:18:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:04.355 18:18:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:05.737 18:18:17 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:07.646 18:18:19 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:10.178 18:18:21 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:12.084 18:18:23 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:13.996 18:18:25 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:13.996 18:18:25 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:13.996 18:18:25 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:13.996 18:18:25 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:13.996 18:18:25 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:13.996 18:18:25 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:13.996 + [[ -n 5430 ]] 00:21:13.996 + sudo kill 5430 00:21:14.007 [Pipeline] } 00:21:14.023 [Pipeline] // timeout 00:21:14.028 [Pipeline] } 00:21:14.041 [Pipeline] // stage 00:21:14.046 [Pipeline] } 00:21:14.061 [Pipeline] // catchError 00:21:14.071 [Pipeline] stage 00:21:14.074 [Pipeline] { (Stop VM) 00:21:14.087 [Pipeline] sh 00:21:14.371 + vagrant halt 00:21:16.913 ==> default: Halting domain... 00:21:25.059 [Pipeline] sh 00:21:25.338 + vagrant destroy -f 00:21:27.875 ==> default: Removing domain... 
00:21:27.887 [Pipeline] sh 00:21:28.170 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:21:28.181 [Pipeline] } 00:21:28.196 [Pipeline] // stage 00:21:28.202 [Pipeline] } 00:21:28.216 [Pipeline] // dir 00:21:28.222 [Pipeline] } 00:21:28.237 [Pipeline] // wrap 00:21:28.243 [Pipeline] } 00:21:28.257 [Pipeline] // catchError 00:21:28.267 [Pipeline] stage 00:21:28.269 [Pipeline] { (Epilogue) 00:21:28.283 [Pipeline] sh 00:21:28.569 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:33.866 [Pipeline] catchError 00:21:33.868 [Pipeline] { 00:21:33.882 [Pipeline] sh 00:21:34.169 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:34.169 Artifacts sizes are good 00:21:34.179 [Pipeline] } 00:21:34.195 [Pipeline] // catchError 00:21:34.209 [Pipeline] archiveArtifacts 00:21:34.217 Archiving artifacts 00:21:34.344 [Pipeline] cleanWs 00:21:34.375 [WS-CLEANUP] Deleting project workspace... 00:21:34.375 [WS-CLEANUP] Deferred wipeout is used... 00:21:34.382 [WS-CLEANUP] done 00:21:34.384 [Pipeline] } 00:21:34.400 [Pipeline] // stage 00:21:34.406 [Pipeline] } 00:21:34.419 [Pipeline] // node 00:21:34.426 [Pipeline] End of Pipeline 00:21:34.464 Finished: SUCCESS